Test Report: Docker_Linux_crio_arm64 21924

af8f7912417d9ebc8a76a18bcb87417cd1a63b57:2025-11-19:42387

Failed tests (36/328)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.56
35 TestAddons/parallel/Registry 14.95
36 TestAddons/parallel/RegistryCreds 0.49
37 TestAddons/parallel/Ingress 144.36
38 TestAddons/parallel/InspektorGadget 6.28
39 TestAddons/parallel/MetricsServer 6.45
41 TestAddons/parallel/CSI 39.84
42 TestAddons/parallel/Headlamp 3.56
43 TestAddons/parallel/CloudSpanner 5.31
44 TestAddons/parallel/LocalPath 19.41
45 TestAddons/parallel/NvidiaDevicePlugin 6.3
46 TestAddons/parallel/Yakd 6.36
97 TestFunctional/parallel/ServiceCmdConnect 603.41
125 TestFunctional/parallel/ServiceCmd/DeployApp 600.89
134 TestFunctional/parallel/ServiceCmd/HTTPS 0.48
135 TestFunctional/parallel/ServiceCmd/Format 0.58
136 TestFunctional/parallel/ServiceCmd/URL 0.6
145 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.78
146 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.98
147 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.4
148 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.41
153 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.3
154 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.41
191 TestJSONOutput/pause/Command 2.22
197 TestJSONOutput/unpause/Command 1.76
282 TestPause/serial/Pause 6.99
297 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.43
304 TestStartStop/group/old-k8s-version/serial/Pause 6.28
311 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.61
313 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.35
322 TestStartStop/group/default-k8s-diff-port/serial/Pause 6.24
328 TestStartStop/group/embed-certs/serial/Pause 7.66
332 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.96
340 TestStartStop/group/newest-cni/serial/Pause 6.03
341 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 3.13
349 TestStartStop/group/no-preload/serial/Pause 7.13
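
The addon "disable" failures detailed below (Volcano, Registry, RegistryCreds) share one signature: before disabling an addon, minikube checks whether the cluster is paused, and that check shells out to "sudo runc list -f json" inside the node. On this crio runner the command fails with "open /run/runc: no such file or directory", which surfaces as MK_ADDON_DISABLE_PAUSED (exit status 11). A minimal reproduction sketch, assuming the addons-238225 profile from this run is still up (the ssh form mirrors the commands used elsewhere in this report):

    out/minikube-linux-arm64 -p addons-238225 ssh "sudo runc list -f json"
    # observed error in the logs below:
    # time="2025-11-19T02:00:28Z" level=error msg="open /run/runc: no such file or directory"

Note that the crictl listing in the same logs succeeds, so the kube-system containers are visible to the CRI; only the runc-based pause check fails.
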
TestAddons/serial/Volcano (0.56s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-238225 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-238225 addons disable volcano --alsologtostderr -v=1: exit status 11 (559.039003ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1119 02:00:28.016959 1472134 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:00:28.017806 1472134 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:00:28.017869 1472134 out.go:374] Setting ErrFile to fd 2...
	I1119 02:00:28.017889 1472134 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:00:28.018228 1472134 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-1463525/.minikube/bin
	I1119 02:00:28.018640 1472134 mustload.go:66] Loading cluster: addons-238225
	I1119 02:00:28.019126 1472134 config.go:182] Loaded profile config "addons-238225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:00:28.019175 1472134 addons.go:607] checking whether the cluster is paused
	I1119 02:00:28.019376 1472134 config.go:182] Loaded profile config "addons-238225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:00:28.019448 1472134 host.go:66] Checking if "addons-238225" exists ...
	I1119 02:00:28.020027 1472134 cli_runner.go:164] Run: docker container inspect addons-238225 --format={{.State.Status}}
	I1119 02:00:28.053602 1472134 ssh_runner.go:195] Run: systemctl --version
	I1119 02:00:28.053683 1472134 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-238225
	I1119 02:00:28.072234 1472134 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34614 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/addons-238225/id_rsa Username:docker}
	I1119 02:00:28.179057 1472134 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 02:00:28.179150 1472134 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 02:00:28.220192 1472134 cri.go:89] found id: "9c232d33326a74acf603b22763aa1dc42d70a479e12432e50bfa9c9405eed2d8"
	I1119 02:00:28.220211 1472134 cri.go:89] found id: "772cfe62f02aaecb841e889b3cbb65d2a9b5651157073f7db00ccf3ddff4c0f1"
	I1119 02:00:28.220215 1472134 cri.go:89] found id: "21ceae69f9b817368777579d582cb590af22569048c0dbfdcc0ff812b0e66e82"
	I1119 02:00:28.220219 1472134 cri.go:89] found id: "d64b782c68c2447187c1e3efd65e0913b2455f09ccc969d04feb74abe38e660a"
	I1119 02:00:28.220223 1472134 cri.go:89] found id: "5e8f0f7f444317dfe5eacc4508981c90287704743391b71bc5ccb185d00f1f05"
	I1119 02:00:28.220227 1472134 cri.go:89] found id: "b38eaf566b86ba36188f3bd9d9c4bf78d2c17cca364dfee5652ee99d4b60a7b9"
	I1119 02:00:28.220259 1472134 cri.go:89] found id: "913d1dc20a3a2502a8c5187d02817ea7496846fae75cdd364154dcf3ba504b95"
	I1119 02:00:28.220263 1472134 cri.go:89] found id: "d4baa1f0a47d31c47f92d4737ca4bf1a74bf81781024ad8fb0bc1aab729ee9e4"
	I1119 02:00:28.220266 1472134 cri.go:89] found id: "91ecb63aa939ed937635e6c758cf1f28306bf72385e48c4d5d6e5eac9fe999f5"
	I1119 02:00:28.220274 1472134 cri.go:89] found id: "c53519ba9e004b3ff7be4f7f3cef7fab949fdcf796eaede9f39a73fd6b199e6e"
	I1119 02:00:28.220280 1472134 cri.go:89] found id: "dcca1b842fe4422e2747d4422c0f9f7b575eecab7d393d4f0995a33df7c79162"
	I1119 02:00:28.220283 1472134 cri.go:89] found id: "e4c59f62ececb825cf3a40b0802bc8b6ecb4d59770f79a84a7403b9319302101"
	I1119 02:00:28.220286 1472134 cri.go:89] found id: "be9d5b6bedfbc91bb699344892f0474b20a841254ce6fd3144408edd11bc007d"
	I1119 02:00:28.220289 1472134 cri.go:89] found id: "d79a6a486de50d2c0685228164143b81b6a22900f48f3a05491c47877066261b"
	I1119 02:00:28.220293 1472134 cri.go:89] found id: "99e3704db1eb401031a862edd15e56b4aec5c806bb339f38d76ba88c7e8fa047"
	I1119 02:00:28.220302 1472134 cri.go:89] found id: "b94070f6dc6d4ea17b3a67020e38e4caa93a1b8b83d5bb691770abfbccddba96"
	I1119 02:00:28.220310 1472134 cri.go:89] found id: "d0f307f4b6c3423d1af0ad1f8066d8df474dcdfb5ec77842739411e57b5bbc77"
	I1119 02:00:28.220327 1472134 cri.go:89] found id: "0c54dc25c8ad51cf6765dc7bc85a062001f2e7ac00a156aaa64443d92f972181"
	I1119 02:00:28.220332 1472134 cri.go:89] found id: "a841f7bd1c9314f581270e99b5249d563aa54a685fc9377709257d65d7241884"
	I1119 02:00:28.220340 1472134 cri.go:89] found id: "76ee598a60e1ecbb0846681cb536270450910784fdbfeec1b724bbc506bc7fc1"
	I1119 02:00:28.220345 1472134 cri.go:89] found id: "a757a1a6114f803952eab86dab9d7a3706e530f2d53eccfb6a046fcfea9ad3b4"
	I1119 02:00:28.220349 1472134 cri.go:89] found id: "7a77a55a81c017bef912f34dd320fb488cb213cabc9bee0e9a3126964c29252b"
	I1119 02:00:28.220352 1472134 cri.go:89] found id: "85abfad90a4c2830eebe69eeb776b9e0f018907069c8517cc51c16103c6b98c1"
	I1119 02:00:28.220355 1472134 cri.go:89] found id: ""
	I1119 02:00:28.220419 1472134 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 02:00:28.236894 1472134 out.go:203] 
	W1119 02:00:28.239851 1472134 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:00:28Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:00:28Z" level=error msg="open /run/runc: no such file or directory"
	
	W1119 02:00:28.239877 1472134 out.go:285] * 
	* 
	W1119 02:00:28.485570 1472134 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 02:00:28.488758 1472134 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-arm64 -p addons-238225 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.56s)

TestAddons/parallel/Registry (14.95s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 4.855351ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-2n7m4" [bda65628-e7f7-4672-860f-daef7b6a78b9] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003706865s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-7m7l6" [7c2e134e-77c6-414b-9341-2e7db32808cd] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003216824s
addons_test.go:392: (dbg) Run:  kubectl --context addons-238225 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-238225 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-238225 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.426644079s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-238225 ip
2025/11/19 02:00:53 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-238225 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-238225 addons disable registry --alsologtostderr -v=1: exit status 11 (267.875256ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1119 02:00:53.540822 1472643 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:00:53.542249 1472643 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:00:53.542270 1472643 out.go:374] Setting ErrFile to fd 2...
	I1119 02:00:53.542281 1472643 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:00:53.542550 1472643 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-1463525/.minikube/bin
	I1119 02:00:53.542846 1472643 mustload.go:66] Loading cluster: addons-238225
	I1119 02:00:53.543211 1472643 config.go:182] Loaded profile config "addons-238225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:00:53.543230 1472643 addons.go:607] checking whether the cluster is paused
	I1119 02:00:53.543330 1472643 config.go:182] Loaded profile config "addons-238225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:00:53.543344 1472643 host.go:66] Checking if "addons-238225" exists ...
	I1119 02:00:53.543786 1472643 cli_runner.go:164] Run: docker container inspect addons-238225 --format={{.State.Status}}
	I1119 02:00:53.565494 1472643 ssh_runner.go:195] Run: systemctl --version
	I1119 02:00:53.565595 1472643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-238225
	I1119 02:00:53.587149 1472643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34614 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/addons-238225/id_rsa Username:docker}
	I1119 02:00:53.691928 1472643 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 02:00:53.692016 1472643 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 02:00:53.721306 1472643 cri.go:89] found id: "9c232d33326a74acf603b22763aa1dc42d70a479e12432e50bfa9c9405eed2d8"
	I1119 02:00:53.721326 1472643 cri.go:89] found id: "772cfe62f02aaecb841e889b3cbb65d2a9b5651157073f7db00ccf3ddff4c0f1"
	I1119 02:00:53.721331 1472643 cri.go:89] found id: "21ceae69f9b817368777579d582cb590af22569048c0dbfdcc0ff812b0e66e82"
	I1119 02:00:53.721334 1472643 cri.go:89] found id: "d64b782c68c2447187c1e3efd65e0913b2455f09ccc969d04feb74abe38e660a"
	I1119 02:00:53.721338 1472643 cri.go:89] found id: "5e8f0f7f444317dfe5eacc4508981c90287704743391b71bc5ccb185d00f1f05"
	I1119 02:00:53.721342 1472643 cri.go:89] found id: "b38eaf566b86ba36188f3bd9d9c4bf78d2c17cca364dfee5652ee99d4b60a7b9"
	I1119 02:00:53.721345 1472643 cri.go:89] found id: "913d1dc20a3a2502a8c5187d02817ea7496846fae75cdd364154dcf3ba504b95"
	I1119 02:00:53.721349 1472643 cri.go:89] found id: "d4baa1f0a47d31c47f92d4737ca4bf1a74bf81781024ad8fb0bc1aab729ee9e4"
	I1119 02:00:53.721352 1472643 cri.go:89] found id: "91ecb63aa939ed937635e6c758cf1f28306bf72385e48c4d5d6e5eac9fe999f5"
	I1119 02:00:53.721362 1472643 cri.go:89] found id: "c53519ba9e004b3ff7be4f7f3cef7fab949fdcf796eaede9f39a73fd6b199e6e"
	I1119 02:00:53.721365 1472643 cri.go:89] found id: "dcca1b842fe4422e2747d4422c0f9f7b575eecab7d393d4f0995a33df7c79162"
	I1119 02:00:53.721369 1472643 cri.go:89] found id: "e4c59f62ececb825cf3a40b0802bc8b6ecb4d59770f79a84a7403b9319302101"
	I1119 02:00:53.721372 1472643 cri.go:89] found id: "be9d5b6bedfbc91bb699344892f0474b20a841254ce6fd3144408edd11bc007d"
	I1119 02:00:53.721375 1472643 cri.go:89] found id: "d79a6a486de50d2c0685228164143b81b6a22900f48f3a05491c47877066261b"
	I1119 02:00:53.721378 1472643 cri.go:89] found id: "99e3704db1eb401031a862edd15e56b4aec5c806bb339f38d76ba88c7e8fa047"
	I1119 02:00:53.721385 1472643 cri.go:89] found id: "b94070f6dc6d4ea17b3a67020e38e4caa93a1b8b83d5bb691770abfbccddba96"
	I1119 02:00:53.721388 1472643 cri.go:89] found id: "d0f307f4b6c3423d1af0ad1f8066d8df474dcdfb5ec77842739411e57b5bbc77"
	I1119 02:00:53.721393 1472643 cri.go:89] found id: "0c54dc25c8ad51cf6765dc7bc85a062001f2e7ac00a156aaa64443d92f972181"
	I1119 02:00:53.721396 1472643 cri.go:89] found id: "a841f7bd1c9314f581270e99b5249d563aa54a685fc9377709257d65d7241884"
	I1119 02:00:53.721399 1472643 cri.go:89] found id: "76ee598a60e1ecbb0846681cb536270450910784fdbfeec1b724bbc506bc7fc1"
	I1119 02:00:53.721404 1472643 cri.go:89] found id: "a757a1a6114f803952eab86dab9d7a3706e530f2d53eccfb6a046fcfea9ad3b4"
	I1119 02:00:53.721408 1472643 cri.go:89] found id: "7a77a55a81c017bef912f34dd320fb488cb213cabc9bee0e9a3126964c29252b"
	I1119 02:00:53.721411 1472643 cri.go:89] found id: "85abfad90a4c2830eebe69eeb776b9e0f018907069c8517cc51c16103c6b98c1"
	I1119 02:00:53.721414 1472643 cri.go:89] found id: ""
	I1119 02:00:53.721462 1472643 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 02:00:53.738922 1472643 out.go:203] 
	W1119 02:00:53.742148 1472643 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:00:53Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:00:53Z" level=error msg="open /run/runc: no such file or directory"
	
	W1119 02:00:53.742177 1472643 out.go:285] * 
	* 
	W1119 02:00:53.751272 1472643 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 02:00:53.755422 1472643 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-arm64 -p addons-238225 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (14.95s)

TestAddons/parallel/RegistryCreds (0.49s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 5.199965ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-238225
addons_test.go:332: (dbg) Run:  kubectl --context addons-238225 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-238225 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-238225 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (258.011306ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1119 02:01:42.747632 1474719 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:01:42.749065 1474719 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:01:42.749117 1474719 out.go:374] Setting ErrFile to fd 2...
	I1119 02:01:42.749138 1474719 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:01:42.749484 1474719 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-1463525/.minikube/bin
	I1119 02:01:42.749889 1474719 mustload.go:66] Loading cluster: addons-238225
	I1119 02:01:42.750351 1474719 config.go:182] Loaded profile config "addons-238225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:01:42.750390 1474719 addons.go:607] checking whether the cluster is paused
	I1119 02:01:42.750592 1474719 config.go:182] Loaded profile config "addons-238225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:01:42.750625 1474719 host.go:66] Checking if "addons-238225" exists ...
	I1119 02:01:42.751134 1474719 cli_runner.go:164] Run: docker container inspect addons-238225 --format={{.State.Status}}
	I1119 02:01:42.768035 1474719 ssh_runner.go:195] Run: systemctl --version
	I1119 02:01:42.768088 1474719 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-238225
	I1119 02:01:42.784626 1474719 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34614 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/addons-238225/id_rsa Username:docker}
	I1119 02:01:42.891998 1474719 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 02:01:42.892088 1474719 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 02:01:42.922762 1474719 cri.go:89] found id: "9c232d33326a74acf603b22763aa1dc42d70a479e12432e50bfa9c9405eed2d8"
	I1119 02:01:42.922793 1474719 cri.go:89] found id: "772cfe62f02aaecb841e889b3cbb65d2a9b5651157073f7db00ccf3ddff4c0f1"
	I1119 02:01:42.922798 1474719 cri.go:89] found id: "21ceae69f9b817368777579d582cb590af22569048c0dbfdcc0ff812b0e66e82"
	I1119 02:01:42.922803 1474719 cri.go:89] found id: "d64b782c68c2447187c1e3efd65e0913b2455f09ccc969d04feb74abe38e660a"
	I1119 02:01:42.922807 1474719 cri.go:89] found id: "5e8f0f7f444317dfe5eacc4508981c90287704743391b71bc5ccb185d00f1f05"
	I1119 02:01:42.922810 1474719 cri.go:89] found id: "b38eaf566b86ba36188f3bd9d9c4bf78d2c17cca364dfee5652ee99d4b60a7b9"
	I1119 02:01:42.922838 1474719 cri.go:89] found id: "913d1dc20a3a2502a8c5187d02817ea7496846fae75cdd364154dcf3ba504b95"
	I1119 02:01:42.922843 1474719 cri.go:89] found id: "d4baa1f0a47d31c47f92d4737ca4bf1a74bf81781024ad8fb0bc1aab729ee9e4"
	I1119 02:01:42.922846 1474719 cri.go:89] found id: "91ecb63aa939ed937635e6c758cf1f28306bf72385e48c4d5d6e5eac9fe999f5"
	I1119 02:01:42.922853 1474719 cri.go:89] found id: "c53519ba9e004b3ff7be4f7f3cef7fab949fdcf796eaede9f39a73fd6b199e6e"
	I1119 02:01:42.922866 1474719 cri.go:89] found id: "dcca1b842fe4422e2747d4422c0f9f7b575eecab7d393d4f0995a33df7c79162"
	I1119 02:01:42.922870 1474719 cri.go:89] found id: "e4c59f62ececb825cf3a40b0802bc8b6ecb4d59770f79a84a7403b9319302101"
	I1119 02:01:42.922873 1474719 cri.go:89] found id: "be9d5b6bedfbc91bb699344892f0474b20a841254ce6fd3144408edd11bc007d"
	I1119 02:01:42.922876 1474719 cri.go:89] found id: "d79a6a486de50d2c0685228164143b81b6a22900f48f3a05491c47877066261b"
	I1119 02:01:42.922879 1474719 cri.go:89] found id: "99e3704db1eb401031a862edd15e56b4aec5c806bb339f38d76ba88c7e8fa047"
	I1119 02:01:42.922884 1474719 cri.go:89] found id: "b94070f6dc6d4ea17b3a67020e38e4caa93a1b8b83d5bb691770abfbccddba96"
	I1119 02:01:42.922892 1474719 cri.go:89] found id: "d0f307f4b6c3423d1af0ad1f8066d8df474dcdfb5ec77842739411e57b5bbc77"
	I1119 02:01:42.922907 1474719 cri.go:89] found id: "0c54dc25c8ad51cf6765dc7bc85a062001f2e7ac00a156aaa64443d92f972181"
	I1119 02:01:42.922913 1474719 cri.go:89] found id: "a841f7bd1c9314f581270e99b5249d563aa54a685fc9377709257d65d7241884"
	I1119 02:01:42.922916 1474719 cri.go:89] found id: "76ee598a60e1ecbb0846681cb536270450910784fdbfeec1b724bbc506bc7fc1"
	I1119 02:01:42.922923 1474719 cri.go:89] found id: "a757a1a6114f803952eab86dab9d7a3706e530f2d53eccfb6a046fcfea9ad3b4"
	I1119 02:01:42.922928 1474719 cri.go:89] found id: "7a77a55a81c017bef912f34dd320fb488cb213cabc9bee0e9a3126964c29252b"
	I1119 02:01:42.922932 1474719 cri.go:89] found id: "85abfad90a4c2830eebe69eeb776b9e0f018907069c8517cc51c16103c6b98c1"
	I1119 02:01:42.922935 1474719 cri.go:89] found id: ""
	I1119 02:01:42.922991 1474719 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 02:01:42.938163 1474719 out.go:203] 
	W1119 02:01:42.941091 1474719 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:01:42Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:01:42Z" level=error msg="open /run/runc: no such file or directory"
	
	W1119 02:01:42.941120 1474719 out.go:285] * 
	* 
	W1119 02:01:42.950150 1474719 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 02:01:42.953005 1474719 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-arm64 -p addons-238225 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.49s)

TestAddons/parallel/Ingress (144.36s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-238225 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-238225 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-238225 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [ce90ee55-0ab8-4329-955a-a7f3592c846d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [ce90ee55-0ab8-4329-955a-a7f3592c846d] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003770296s
I1119 02:01:33.275712 1465377 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-238225 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-238225 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.673989047s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-238225 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-238225 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-238225
helpers_test.go:243: (dbg) docker inspect addons-238225:

-- stdout --
	[
	    {
	        "Id": "bb862ec6c86ae848db42de546db1fa5e2ba1b98abae1028bc8c65e63056c58e8",
	        "Created": "2025-11-19T01:58:05.25132883Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1466580,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T01:58:05.312347122Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:6dfeb5329cc83d126555f88f43b09eef7a09c7f546c9166b94d33747df91b6df",
	        "ResolvConfPath": "/var/lib/docker/containers/bb862ec6c86ae848db42de546db1fa5e2ba1b98abae1028bc8c65e63056c58e8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bb862ec6c86ae848db42de546db1fa5e2ba1b98abae1028bc8c65e63056c58e8/hostname",
	        "HostsPath": "/var/lib/docker/containers/bb862ec6c86ae848db42de546db1fa5e2ba1b98abae1028bc8c65e63056c58e8/hosts",
	        "LogPath": "/var/lib/docker/containers/bb862ec6c86ae848db42de546db1fa5e2ba1b98abae1028bc8c65e63056c58e8/bb862ec6c86ae848db42de546db1fa5e2ba1b98abae1028bc8c65e63056c58e8-json.log",
	        "Name": "/addons-238225",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-238225:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-238225",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "bb862ec6c86ae848db42de546db1fa5e2ba1b98abae1028bc8c65e63056c58e8",
	                "LowerDir": "/var/lib/docker/overlay2/c3d6f04405a20c13146ce0925cf4f362b7291938412bf7b19c1b899ae703e6ee-init/diff:/var/lib/docker/overlay2/c48d08e2bd245db4e1c5c6447aff9f72126e9377265a1f1172daf5070a059e2a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c3d6f04405a20c13146ce0925cf4f362b7291938412bf7b19c1b899ae703e6ee/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c3d6f04405a20c13146ce0925cf4f362b7291938412bf7b19c1b899ae703e6ee/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c3d6f04405a20c13146ce0925cf4f362b7291938412bf7b19c1b899ae703e6ee/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-238225",
	                "Source": "/var/lib/docker/volumes/addons-238225/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-238225",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-238225",
	                "name.minikube.sigs.k8s.io": "addons-238225",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "40172d9f065d4d169eb7efe20c2a1f540a540d918506289c1d5e8c4e2c96efb0",
	            "SandboxKey": "/var/run/docker/netns/40172d9f065d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34614"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34615"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34618"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34616"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34617"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-238225": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1a:ca:f7:4b:26:e3",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "762382ad93c346744971eeb0989cc075ed25beb2a4ed8d7589e9c787cee67cfe",
	                    "EndpointID": "d7ebfa5485a67620e770e95408df74bf2a1c4a6bf0d5c7b02c95864b61584838",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-238225",
	                        "bb862ec6c86a"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-238225 -n addons-238225
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-238225 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-238225 logs -n 25: (1.396205437s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-docker-772744                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-772744 │ jenkins │ v1.37.0 │ 19 Nov 25 01:57 UTC │ 19 Nov 25 01:57 UTC │
	│ start   │ --download-only -p binary-mirror-689753 --alsologtostderr --binary-mirror http://127.0.0.1:36283 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-689753   │ jenkins │ v1.37.0 │ 19 Nov 25 01:57 UTC │                     │
	│ delete  │ -p binary-mirror-689753                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-689753   │ jenkins │ v1.37.0 │ 19 Nov 25 01:57 UTC │ 19 Nov 25 01:57 UTC │
	│ addons  │ enable dashboard -p addons-238225                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-238225          │ jenkins │ v1.37.0 │ 19 Nov 25 01:57 UTC │                     │
	│ addons  │ disable dashboard -p addons-238225                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-238225          │ jenkins │ v1.37.0 │ 19 Nov 25 01:57 UTC │                     │
	│ start   │ -p addons-238225 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-238225          │ jenkins │ v1.37.0 │ 19 Nov 25 01:57 UTC │ 19 Nov 25 02:00 UTC │
	│ addons  │ addons-238225 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-238225          │ jenkins │ v1.37.0 │ 19 Nov 25 02:00 UTC │                     │
	│ addons  │ addons-238225 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-238225          │ jenkins │ v1.37.0 │ 19 Nov 25 02:00 UTC │                     │
	│ addons  │ addons-238225 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-238225          │ jenkins │ v1.37.0 │ 19 Nov 25 02:00 UTC │                     │
	│ addons  │ addons-238225 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-238225          │ jenkins │ v1.37.0 │ 19 Nov 25 02:00 UTC │                     │
	│ ip      │ addons-238225 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-238225          │ jenkins │ v1.37.0 │ 19 Nov 25 02:00 UTC │ 19 Nov 25 02:00 UTC │
	│ addons  │ addons-238225 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-238225          │ jenkins │ v1.37.0 │ 19 Nov 25 02:00 UTC │                     │
	│ addons  │ addons-238225 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-238225          │ jenkins │ v1.37.0 │ 19 Nov 25 02:00 UTC │                     │
	│ addons  │ enable headlamp -p addons-238225 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-238225          │ jenkins │ v1.37.0 │ 19 Nov 25 02:00 UTC │                     │
	│ addons  │ addons-238225 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-238225          │ jenkins │ v1.37.0 │ 19 Nov 25 02:01 UTC │                     │
	│ ssh     │ addons-238225 ssh cat /opt/local-path-provisioner/pvc-62679135-f675-42e1-8d98-c37f6ea08626_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-238225          │ jenkins │ v1.37.0 │ 19 Nov 25 02:01 UTC │ 19 Nov 25 02:01 UTC │
	│ addons  │ addons-238225 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-238225          │ jenkins │ v1.37.0 │ 19 Nov 25 02:01 UTC │                     │
	│ addons  │ addons-238225 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-238225          │ jenkins │ v1.37.0 │ 19 Nov 25 02:01 UTC │                     │
	│ addons  │ addons-238225 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-238225          │ jenkins │ v1.37.0 │ 19 Nov 25 02:01 UTC │                     │
	│ ssh     │ addons-238225 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-238225          │ jenkins │ v1.37.0 │ 19 Nov 25 02:01 UTC │                     │
	│ addons  │ addons-238225 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-238225          │ jenkins │ v1.37.0 │ 19 Nov 25 02:01 UTC │                     │
	│ addons  │ addons-238225 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-238225          │ jenkins │ v1.37.0 │ 19 Nov 25 02:01 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-238225                                                                                                                                                                                                                                                                                                                                                                                           │ addons-238225          │ jenkins │ v1.37.0 │ 19 Nov 25 02:01 UTC │ 19 Nov 25 02:01 UTC │
	│ addons  │ addons-238225 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-238225          │ jenkins │ v1.37.0 │ 19 Nov 25 02:01 UTC │                     │
	│ ip      │ addons-238225 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-238225          │ jenkins │ v1.37.0 │ 19 Nov 25 02:03 UTC │ 19 Nov 25 02:03 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 01:57:39
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 01:57:39.887628 1466137 out.go:360] Setting OutFile to fd 1 ...
	I1119 01:57:39.887841 1466137 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 01:57:39.887870 1466137 out.go:374] Setting ErrFile to fd 2...
	I1119 01:57:39.887889 1466137 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 01:57:39.888171 1466137 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-1463525/.minikube/bin
	I1119 01:57:39.888671 1466137 out.go:368] Setting JSON to false
	I1119 01:57:39.889558 1466137 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":34787,"bootTime":1763482673,"procs":143,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1119 01:57:39.889653 1466137 start.go:143] virtualization:  
	I1119 01:57:39.893175 1466137 out.go:179] * [addons-238225] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1119 01:57:39.896260 1466137 out.go:179]   - MINIKUBE_LOCATION=21924
	I1119 01:57:39.896340 1466137 notify.go:221] Checking for updates...
	I1119 01:57:39.902134 1466137 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 01:57:39.905102 1466137 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21924-1463525/kubeconfig
	I1119 01:57:39.908043 1466137 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-1463525/.minikube
	I1119 01:57:39.910916 1466137 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1119 01:57:39.913780 1466137 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 01:57:39.916781 1466137 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 01:57:39.940446 1466137 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1119 01:57:39.940584 1466137 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 01:57:40.002115 1466137 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2025-11-19 01:57:39.992786597 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 01:57:40.002234 1466137 docker.go:319] overlay module found
	I1119 01:57:40.012168 1466137 out.go:179] * Using the docker driver based on user configuration
	I1119 01:57:40.017606 1466137 start.go:309] selected driver: docker
	I1119 01:57:40.017638 1466137 start.go:930] validating driver "docker" against <nil>
	I1119 01:57:40.017654 1466137 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 01:57:40.018543 1466137 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 01:57:40.080713 1466137 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2025-11-19 01:57:40.071603283 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 01:57:40.080872 1466137 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1119 01:57:40.081111 1466137 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 01:57:40.083887 1466137 out.go:179] * Using Docker driver with root privileges
	I1119 01:57:40.086657 1466137 cni.go:84] Creating CNI manager for ""
	I1119 01:57:40.086729 1466137 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 01:57:40.086740 1466137 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1119 01:57:40.086825 1466137 start.go:353] cluster config:
	{Name:addons-238225 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-238225 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1119 01:57:40.090050 1466137 out.go:179] * Starting "addons-238225" primary control-plane node in "addons-238225" cluster
	I1119 01:57:40.092976 1466137 cache.go:134] Beginning downloading kic base image for docker with crio
	I1119 01:57:40.096012 1466137 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1119 01:57:40.099012 1466137 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 01:57:40.099053 1466137 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1119 01:57:40.099106 1466137 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1119 01:57:40.099118 1466137 cache.go:65] Caching tarball of preloaded images
	I1119 01:57:40.099213 1466137 preload.go:238] Found /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1119 01:57:40.099225 1466137 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
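The two preload lines above confirm the cached tarball is reused rather than re-downloaded; a minimal sketch (host path taken verbatim from this log) of checking what sits in that cache:
  $ ls -lh /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/preloaded-tarball/
  # expect preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4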
	I1119 01:57:40.099709 1466137 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/config.json ...
	I1119 01:57:40.099759 1466137 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/config.json: {Name:mk0be708edd925bb7df5f8d5c43c2fb624d9f741 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 01:57:40.116328 1466137 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a to local cache
	I1119 01:57:40.116461 1466137 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local cache directory
	I1119 01:57:40.116482 1466137 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local cache directory, skipping pull
	I1119 01:57:40.116487 1466137 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in cache, skipping pull
	I1119 01:57:40.116495 1466137 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a as a tarball
	I1119 01:57:40.116501 1466137 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a from local cache
	I1119 01:57:58.298287 1466137 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a from cached tarball
	I1119 01:57:58.298328 1466137 cache.go:243] Successfully downloaded all kic artifacts
	I1119 01:57:58.298359 1466137 start.go:360] acquireMachinesLock for addons-238225: {Name:mk62d20918077dda75b87e2eea537d37ef4e35a5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 01:57:58.299128 1466137 start.go:364] duration metric: took 745.554µs to acquireMachinesLock for "addons-238225"
	I1119 01:57:58.299177 1466137 start.go:93] Provisioning new machine with config: &{Name:addons-238225 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-238225 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 01:57:58.299257 1466137 start.go:125] createHost starting for "" (driver="docker")
	I1119 01:57:58.302652 1466137 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1119 01:57:58.302892 1466137 start.go:159] libmachine.API.Create for "addons-238225" (driver="docker")
	I1119 01:57:58.302939 1466137 client.go:173] LocalClient.Create starting
	I1119 01:57:58.303042 1466137 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem
	I1119 01:57:58.655933 1466137 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/cert.pem
	I1119 01:57:58.731765 1466137 cli_runner.go:164] Run: docker network inspect addons-238225 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1119 01:57:58.747475 1466137 cli_runner.go:211] docker network inspect addons-238225 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1119 01:57:58.747567 1466137 network_create.go:284] running [docker network inspect addons-238225] to gather additional debugging logs...
	I1119 01:57:58.747589 1466137 cli_runner.go:164] Run: docker network inspect addons-238225
	W1119 01:57:58.762882 1466137 cli_runner.go:211] docker network inspect addons-238225 returned with exit code 1
	I1119 01:57:58.762914 1466137 network_create.go:287] error running [docker network inspect addons-238225]: docker network inspect addons-238225: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-238225 not found
	I1119 01:57:58.762940 1466137 network_create.go:289] output of [docker network inspect addons-238225]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-238225 not found
	
	** /stderr **
	I1119 01:57:58.763036 1466137 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 01:57:58.778718 1466137 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400192fba0}
	I1119 01:57:58.778756 1466137 network_create.go:124] attempt to create docker network addons-238225 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1119 01:57:58.778814 1466137 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-238225 addons-238225
	I1119 01:57:58.832923 1466137 network_create.go:108] docker network addons-238225 192.168.49.0/24 created
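As a quick cross-check (a sketch only; the network name, subnet and gateway are the ones recorded just above), the freshly created bridge network can be inspected from the host:
  $ docker network inspect addons-238225 --format '{{range .IPAM.Config}}{{.Subnet}} via {{.Gateway}}{{end}}'
which should report 192.168.49.0/24 via 192.168.49.1.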
	I1119 01:57:58.832956 1466137 kic.go:121] calculated static IP "192.168.49.2" for the "addons-238225" container
	I1119 01:57:58.833046 1466137 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1119 01:57:58.850182 1466137 cli_runner.go:164] Run: docker volume create addons-238225 --label name.minikube.sigs.k8s.io=addons-238225 --label created_by.minikube.sigs.k8s.io=true
	I1119 01:57:58.869962 1466137 oci.go:103] Successfully created a docker volume addons-238225
	I1119 01:57:58.870053 1466137 cli_runner.go:164] Run: docker run --rm --name addons-238225-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-238225 --entrypoint /usr/bin/test -v addons-238225:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib
	I1119 01:58:00.761991 1466137 cli_runner.go:217] Completed: docker run --rm --name addons-238225-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-238225 --entrypoint /usr/bin/test -v addons-238225:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib: (1.891899795s)
	I1119 01:58:00.762019 1466137 oci.go:107] Successfully prepared a docker volume addons-238225
	I1119 01:58:00.762078 1466137 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 01:58:00.762092 1466137 kic.go:194] Starting extracting preloaded images to volume ...
	I1119 01:58:00.762165 1466137 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-238225:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir
	I1119 01:58:05.178333 1466137 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-238225:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir: (4.416129642s)
	I1119 01:58:05.178366 1466137 kic.go:203] duration metric: took 4.416270045s to extract preloaded images to volume ...
	W1119 01:58:05.178496 1466137 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1119 01:58:05.178607 1466137 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1119 01:58:05.236744 1466137 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-238225 --name addons-238225 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-238225 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-238225 --network addons-238225 --ip 192.168.49.2 --volume addons-238225:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a
	I1119 01:58:05.550352 1466137 cli_runner.go:164] Run: docker container inspect addons-238225 --format={{.State.Running}}
	I1119 01:58:05.571557 1466137 cli_runner.go:164] Run: docker container inspect addons-238225 --format={{.State.Status}}
	I1119 01:58:05.593696 1466137 cli_runner.go:164] Run: docker exec addons-238225 stat /var/lib/dpkg/alternatives/iptables
	I1119 01:58:05.648811 1466137 oci.go:144] the created container "addons-238225" has a running status.
	I1119 01:58:05.648841 1466137 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21924-1463525/.minikube/machines/addons-238225/id_rsa...
	I1119 01:58:05.757849 1466137 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21924-1463525/.minikube/machines/addons-238225/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1119 01:58:05.780141 1466137 cli_runner.go:164] Run: docker container inspect addons-238225 --format={{.State.Status}}
	I1119 01:58:05.802052 1466137 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1119 01:58:05.802073 1466137 kic_runner.go:114] Args: [docker exec --privileged addons-238225 chown docker:docker /home/docker/.ssh/authorized_keys]
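The two kic_runner steps above install the generated public key for the in-container docker user; assuming the key path shown here and the host port mapped for 22/tcp a few lines below (34614 in this run), the node could also be reached with plain ssh as a sketch:
  $ ssh -o StrictHostKeyChecking=no -i /home/jenkins/minikube-integration/21924-1463525/.minikube/machines/addons-238225/id_rsa -p 34614 docker@127.0.0.1 hostname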
	I1119 01:58:05.858091 1466137 cli_runner.go:164] Run: docker container inspect addons-238225 --format={{.State.Status}}
	I1119 01:58:05.887254 1466137 machine.go:94] provisionDockerMachine start ...
	I1119 01:58:05.887357 1466137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-238225
	I1119 01:58:05.919833 1466137 main.go:143] libmachine: Using SSH client type: native
	I1119 01:58:05.920152 1466137 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34614 <nil> <nil>}
	I1119 01:58:05.920167 1466137 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 01:58:05.920817 1466137 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1119 01:58:09.061005 1466137 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-238225
	
	I1119 01:58:09.061085 1466137 ubuntu.go:182] provisioning hostname "addons-238225"
	I1119 01:58:09.061169 1466137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-238225
	I1119 01:58:09.078223 1466137 main.go:143] libmachine: Using SSH client type: native
	I1119 01:58:09.078537 1466137 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34614 <nil> <nil>}
	I1119 01:58:09.078554 1466137 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-238225 && echo "addons-238225" | sudo tee /etc/hostname
	I1119 01:58:09.226025 1466137 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-238225
	
	I1119 01:58:09.226120 1466137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-238225
	I1119 01:58:09.243606 1466137 main.go:143] libmachine: Using SSH client type: native
	I1119 01:58:09.243926 1466137 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34614 <nil> <nil>}
	I1119 01:58:09.243952 1466137 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-238225' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-238225/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-238225' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 01:58:09.389414 1466137 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 01:58:09.389448 1466137 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21924-1463525/.minikube CaCertPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21924-1463525/.minikube}
	I1119 01:58:09.389473 1466137 ubuntu.go:190] setting up certificates
	I1119 01:58:09.389483 1466137 provision.go:84] configureAuth start
	I1119 01:58:09.389562 1466137 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-238225
	I1119 01:58:09.405592 1466137 provision.go:143] copyHostCerts
	I1119 01:58:09.405675 1466137 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.pem (1078 bytes)
	I1119 01:58:09.405808 1466137 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21924-1463525/.minikube/cert.pem (1123 bytes)
	I1119 01:58:09.405884 1466137 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21924-1463525/.minikube/key.pem (1675 bytes)
	I1119 01:58:09.405944 1466137 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca-key.pem org=jenkins.addons-238225 san=[127.0.0.1 192.168.49.2 addons-238225 localhost minikube]
	I1119 01:58:09.667984 1466137 provision.go:177] copyRemoteCerts
	I1119 01:58:09.668054 1466137 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 01:58:09.668095 1466137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-238225
	I1119 01:58:09.686289 1466137 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34614 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/addons-238225/id_rsa Username:docker}
	I1119 01:58:09.790013 1466137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1119 01:58:09.806670 1466137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1119 01:58:09.823790 1466137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
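copyRemoteCerts above pushes the CA and server certificate pair into /etc/docker on the node; a minimal sketch of verifying they landed, using the same minikube binary and profile as the rest of this run:
  $ out/minikube-linux-arm64 -p addons-238225 ssh -- ls -l /etc/docker/ca.pem /etc/docker/server.pem /etc/docker/server-key.pem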
	I1119 01:58:09.840730 1466137 provision.go:87] duration metric: took 451.230783ms to configureAuth
	I1119 01:58:09.840754 1466137 ubuntu.go:206] setting minikube options for container-runtime
	I1119 01:58:09.840974 1466137 config.go:182] Loaded profile config "addons-238225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 01:58:09.841090 1466137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-238225
	I1119 01:58:09.857991 1466137 main.go:143] libmachine: Using SSH client type: native
	I1119 01:58:09.858326 1466137 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34614 <nil> <nil>}
	I1119 01:58:09.858346 1466137 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 01:58:10.152667 1466137 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 01:58:10.152691 1466137 machine.go:97] duration metric: took 4.26541208s to provisionDockerMachine
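The SSH step that just completed wrote CRIO_MINIKUBE_OPTIONS and restarted cri-o; as a sketch (same profile and file path as above), the applied setting can be read back with:
  $ out/minikube-linux-arm64 -p addons-238225 ssh -- cat /etc/sysconfig/crio.minikube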
	I1119 01:58:10.152701 1466137 client.go:176] duration metric: took 11.849752219s to LocalClient.Create
	I1119 01:58:10.152718 1466137 start.go:167] duration metric: took 11.849822945s to libmachine.API.Create "addons-238225"
	I1119 01:58:10.152728 1466137 start.go:293] postStartSetup for "addons-238225" (driver="docker")
	I1119 01:58:10.152742 1466137 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 01:58:10.152805 1466137 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 01:58:10.152851 1466137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-238225
	I1119 01:58:10.172016 1466137 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34614 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/addons-238225/id_rsa Username:docker}
	I1119 01:58:10.272983 1466137 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 01:58:10.275945 1466137 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 01:58:10.275970 1466137 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 01:58:10.275981 1466137 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-1463525/.minikube/addons for local assets ...
	I1119 01:58:10.276042 1466137 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-1463525/.minikube/files for local assets ...
	I1119 01:58:10.276070 1466137 start.go:296] duration metric: took 123.333104ms for postStartSetup
	I1119 01:58:10.276417 1466137 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-238225
	I1119 01:58:10.292043 1466137 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/config.json ...
	I1119 01:58:10.292315 1466137 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 01:58:10.292367 1466137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-238225
	I1119 01:58:10.307911 1466137 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34614 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/addons-238225/id_rsa Username:docker}
	I1119 01:58:10.402329 1466137 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 01:58:10.406715 1466137 start.go:128] duration metric: took 12.107442418s to createHost
	I1119 01:58:10.406741 1466137 start.go:83] releasing machines lock for "addons-238225", held for 12.107596818s
	I1119 01:58:10.406817 1466137 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-238225
	I1119 01:58:10.422726 1466137 ssh_runner.go:195] Run: cat /version.json
	I1119 01:58:10.422785 1466137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-238225
	I1119 01:58:10.422788 1466137 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 01:58:10.422854 1466137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-238225
	I1119 01:58:10.445665 1466137 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34614 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/addons-238225/id_rsa Username:docker}
	I1119 01:58:10.447272 1466137 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34614 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/addons-238225/id_rsa Username:docker}
	I1119 01:58:10.633790 1466137 ssh_runner.go:195] Run: systemctl --version
	I1119 01:58:10.640058 1466137 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 01:58:10.680314 1466137 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 01:58:10.684527 1466137 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 01:58:10.684619 1466137 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 01:58:10.711033 1466137 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
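The find/mv step above parks the podman and crio bridge CNI configs so kindnet can own pod networking; a quick sketch of listing what remains active (paths taken from the log line above):
  $ out/minikube-linux-arm64 -p addons-238225 ssh -- ls /etc/cni/net.d/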
	I1119 01:58:10.711059 1466137 start.go:496] detecting cgroup driver to use...
	I1119 01:58:10.711125 1466137 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1119 01:58:10.711199 1466137 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 01:58:10.728807 1466137 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 01:58:10.742848 1466137 docker.go:218] disabling cri-docker service (if available) ...
	I1119 01:58:10.742919 1466137 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 01:58:10.759018 1466137 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 01:58:10.776842 1466137 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 01:58:10.893379 1466137 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 01:58:11.012170 1466137 docker.go:234] disabling docker service ...
	I1119 01:58:11.012241 1466137 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 01:58:11.033812 1466137 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 01:58:11.046728 1466137 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 01:58:11.160940 1466137 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 01:58:11.274852 1466137 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 01:58:11.286921 1466137 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 01:58:11.300834 1466137 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 01:58:11.300942 1466137 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 01:58:11.309662 1466137 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1119 01:58:11.309786 1466137 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 01:58:11.318817 1466137 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 01:58:11.327369 1466137 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 01:58:11.335727 1466137 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 01:58:11.343430 1466137 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 01:58:11.352009 1466137 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 01:58:11.364713 1466137 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 01:58:11.373448 1466137 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 01:58:11.381463 1466137 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 01:58:11.389037 1466137 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 01:58:11.503634 1466137 ssh_runner.go:195] Run: sudo systemctl restart crio
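The sed sequence above sets the pause image, cgroup manager, conmon cgroup and the unprivileged-port sysctl in /etc/crio/crio.conf.d/02-crio.conf before this restart; a minimal sketch of re-checking those keys on the node:
  $ out/minikube-linux-arm64 -p addons-238225 ssh -- sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf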
	I1119 01:58:11.674864 1466137 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 01:58:11.675018 1466137 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 01:58:11.678819 1466137 start.go:564] Will wait 60s for crictl version
	I1119 01:58:11.678941 1466137 ssh_runner.go:195] Run: which crictl
	I1119 01:58:11.682339 1466137 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 01:58:11.705234 1466137 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1119 01:58:11.705423 1466137 ssh_runner.go:195] Run: crio --version
	I1119 01:58:11.733082 1466137 ssh_runner.go:195] Run: crio --version
	I1119 01:58:11.764601 1466137 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1119 01:58:11.767424 1466137 cli_runner.go:164] Run: docker network inspect addons-238225 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 01:58:11.781609 1466137 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1119 01:58:11.785317 1466137 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
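The hosts-file rewrite above pins host.minikube.internal to the network gateway; a sketch of confirming the entry from inside the node:
  $ out/minikube-linux-arm64 -p addons-238225 ssh -- grep host.minikube.internal /etc/hosts
  # expect: 192.168.49.1	host.minikube.internal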
	I1119 01:58:11.794954 1466137 kubeadm.go:884] updating cluster {Name:addons-238225 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-238225 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 01:58:11.795081 1466137 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 01:58:11.795137 1466137 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 01:58:11.829128 1466137 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 01:58:11.829148 1466137 crio.go:433] Images already preloaded, skipping extraction
	I1119 01:58:11.829203 1466137 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 01:58:11.852992 1466137 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 01:58:11.853014 1466137 cache_images.go:86] Images are preloaded, skipping loading
	I1119 01:58:11.853022 1466137 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1119 01:58:11.853109 1466137 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-238225 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-238225 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
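The kubelet unit rendered above is later copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the scp lines further down); as a sketch, the effective drop-in and kubelet state can be checked with:
  $ out/minikube-linux-arm64 -p addons-238225 ssh -- sudo systemctl cat kubelet
  $ out/minikube-linux-arm64 -p addons-238225 ssh -- sudo systemctl is-active kubelet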
	I1119 01:58:11.853196 1466137 ssh_runner.go:195] Run: crio config
	I1119 01:58:11.922557 1466137 cni.go:84] Creating CNI manager for ""
	I1119 01:58:11.922580 1466137 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 01:58:11.922598 1466137 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 01:58:11.922641 1466137 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-238225 NodeName:addons-238225 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 01:58:11.922802 1466137 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-238225"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1119 01:58:11.922916 1466137 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 01:58:11.930418 1466137 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 01:58:11.930527 1466137 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 01:58:11.938001 1466137 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1119 01:58:11.950566 1466137 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 01:58:11.963170 1466137 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
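The kubeadm config printed above now sits at /var/tmp/minikube/kubeadm.yaml.new on the node; a non-authoritative sketch of sanity-checking it without touching the cluster, assuming kubeadm lives next to the kubelet binary found earlier in this log:
  $ out/minikube-linux-arm64 -p addons-238225 ssh -- sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run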
	I1119 01:58:11.976223 1466137 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1119 01:58:11.979835 1466137 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 01:58:11.989395 1466137 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 01:58:12.109291 1466137 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 01:58:12.126019 1466137 certs.go:69] Setting up /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225 for IP: 192.168.49.2
	I1119 01:58:12.126042 1466137 certs.go:195] generating shared ca certs ...
	I1119 01:58:12.126059 1466137 certs.go:227] acquiring lock for ca certs: {Name:mk25124d30540ffea11da5f0aa38fb6f55186602 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 01:58:12.126245 1466137 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.key
	I1119 01:58:12.846969 1466137 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.crt ...
	I1119 01:58:12.847002 1466137 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.crt: {Name:mk0c4361aeeaf7c6e5e4fb8de5c4717adb9c2334 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 01:58:12.847894 1466137 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.key ...
	I1119 01:58:12.847951 1466137 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.key: {Name:mkce782e72709e74ea14a8a7ccdc217d1e1d221c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 01:58:12.848736 1466137 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/proxy-client-ca.key
	I1119 01:58:13.019443 1466137 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-1463525/.minikube/proxy-client-ca.crt ...
	I1119 01:58:13.019472 1466137 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/.minikube/proxy-client-ca.crt: {Name:mk2eb27b4a9cc79187840dd91a0f84ea78372129 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 01:58:13.020288 1466137 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-1463525/.minikube/proxy-client-ca.key ...
	I1119 01:58:13.020300 1466137 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/.minikube/proxy-client-ca.key: {Name:mk4f229732877afbb5a1f392429a97effead11d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 01:58:13.021047 1466137 certs.go:257] generating profile certs ...
	I1119 01:58:13.021113 1466137 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/client.key
	I1119 01:58:13.021125 1466137 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/client.crt with IP's: []
	I1119 01:58:13.291722 1466137 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/client.crt ...
	I1119 01:58:13.291762 1466137 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/client.crt: {Name:mk7c6f3478e869402733655745f3c649bc4cf27b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 01:58:13.291974 1466137 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/client.key ...
	I1119 01:58:13.291987 1466137 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/client.key: {Name:mk5393eeaae98609485c90bd844759b781e24061 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 01:58:13.292729 1466137 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/apiserver.key.d3545e80
	I1119 01:58:13.292753 1466137 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/apiserver.crt.d3545e80 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1119 01:58:13.861856 1466137 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/apiserver.crt.d3545e80 ...
	I1119 01:58:13.861887 1466137 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/apiserver.crt.d3545e80: {Name:mk0e7f3115a319e6424c82313a6ba7ca09e7de62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 01:58:13.862074 1466137 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/apiserver.key.d3545e80 ...
	I1119 01:58:13.862089 1466137 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/apiserver.key.d3545e80: {Name:mk802419c573368863efff5022d2830176aeec97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 01:58:13.862174 1466137 certs.go:382] copying /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/apiserver.crt.d3545e80 -> /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/apiserver.crt
	I1119 01:58:13.862258 1466137 certs.go:386] copying /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/apiserver.key.d3545e80 -> /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/apiserver.key
	I1119 01:58:13.862316 1466137 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/proxy-client.key
	I1119 01:58:13.862337 1466137 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/proxy-client.crt with IP's: []
	I1119 01:58:14.087267 1466137 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/proxy-client.crt ...
	I1119 01:58:14.087300 1466137 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/proxy-client.crt: {Name:mkc267d713810694574eca8f448ad878ddde9de0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 01:58:14.088139 1466137 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/proxy-client.key ...
	I1119 01:58:14.088160 1466137 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/proxy-client.key: {Name:mk30ba7bd08f0e5a57ca942eb2c0669db74541ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 01:58:14.088377 1466137 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca-key.pem (1679 bytes)
	I1119 01:58:14.088427 1466137 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem (1078 bytes)
	I1119 01:58:14.088456 1466137 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/cert.pem (1123 bytes)
	I1119 01:58:14.088497 1466137 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/key.pem (1675 bytes)
	I1119 01:58:14.089092 1466137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 01:58:14.108086 1466137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1119 01:58:14.127496 1466137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 01:58:14.145713 1466137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1119 01:58:14.163024 1466137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1119 01:58:14.180262 1466137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1119 01:58:14.197519 1466137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 01:58:14.214975 1466137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1119 01:58:14.231973 1466137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 01:58:14.249394 1466137 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 01:58:14.261888 1466137 ssh_runner.go:195] Run: openssl version
	I1119 01:58:14.267860 1466137 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 01:58:14.276028 1466137 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 01:58:14.279598 1466137 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 01:58 /usr/share/ca-certificates/minikubeCA.pem
	I1119 01:58:14.279655 1466137 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 01:58:14.321296 1466137 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 01:58:14.329640 1466137 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 01:58:14.333151 1466137 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1119 01:58:14.333222 1466137 kubeadm.go:401] StartCluster: {Name:addons-238225 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-238225 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 01:58:14.333306 1466137 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 01:58:14.333364 1466137 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 01:58:14.358947 1466137 cri.go:89] found id: ""
	I1119 01:58:14.359087 1466137 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 01:58:14.366703 1466137 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1119 01:58:14.374159 1466137 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1119 01:58:14.374260 1466137 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1119 01:58:14.381688 1466137 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1119 01:58:14.381709 1466137 kubeadm.go:158] found existing configuration files:
	
	I1119 01:58:14.381759 1466137 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1119 01:58:14.388970 1466137 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1119 01:58:14.389036 1466137 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1119 01:58:14.395948 1466137 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1119 01:58:14.403246 1466137 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1119 01:58:14.403313 1466137 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1119 01:58:14.410353 1466137 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1119 01:58:14.417712 1466137 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1119 01:58:14.417785 1466137 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1119 01:58:14.424651 1466137 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1119 01:58:14.431797 1466137 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1119 01:58:14.431910 1466137 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1119 01:58:14.439158 1466137 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1119 01:58:14.479646 1466137 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1119 01:58:14.479711 1466137 kubeadm.go:319] [preflight] Running pre-flight checks
	I1119 01:58:14.519670 1466137 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1119 01:58:14.519749 1466137 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1119 01:58:14.519791 1466137 kubeadm.go:319] OS: Linux
	I1119 01:58:14.519850 1466137 kubeadm.go:319] CGROUPS_CPU: enabled
	I1119 01:58:14.519905 1466137 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1119 01:58:14.519959 1466137 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1119 01:58:14.520013 1466137 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1119 01:58:14.520067 1466137 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1119 01:58:14.520124 1466137 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1119 01:58:14.520175 1466137 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1119 01:58:14.520232 1466137 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1119 01:58:14.520285 1466137 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1119 01:58:14.613186 1466137 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1119 01:58:14.613347 1466137 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1119 01:58:14.613474 1466137 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1119 01:58:14.620897 1466137 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1119 01:58:14.627578 1466137 out.go:252]   - Generating certificates and keys ...
	I1119 01:58:14.627677 1466137 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1119 01:58:14.627751 1466137 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1119 01:58:15.007429 1466137 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1119 01:58:15.420693 1466137 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1119 01:58:16.890857 1466137 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1119 01:58:17.407029 1466137 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1119 01:58:17.741418 1466137 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1119 01:58:17.741720 1466137 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-238225 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1119 01:58:18.035303 1466137 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1119 01:58:18.035697 1466137 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-238225 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1119 01:58:18.379554 1466137 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1119 01:58:18.690194 1466137 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1119 01:58:18.917694 1466137 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1119 01:58:18.918017 1466137 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1119 01:58:19.047107 1466137 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1119 01:58:19.467560 1466137 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1119 01:58:19.876620 1466137 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1119 01:58:20.588082 1466137 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1119 01:58:20.656983 1466137 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1119 01:58:20.657621 1466137 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1119 01:58:20.662211 1466137 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1119 01:58:20.665471 1466137 out.go:252]   - Booting up control plane ...
	I1119 01:58:20.665599 1466137 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1119 01:58:20.665711 1466137 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1119 01:58:20.666404 1466137 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1119 01:58:20.681166 1466137 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1119 01:58:20.681565 1466137 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1119 01:58:20.688638 1466137 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1119 01:58:20.689294 1466137 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1119 01:58:20.689658 1466137 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1119 01:58:20.818044 1466137 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1119 01:58:20.818177 1466137 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1119 01:58:21.822146 1466137 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.004365008s
	I1119 01:58:21.825064 1466137 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1119 01:58:21.825340 1466137 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1119 01:58:21.825611 1466137 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1119 01:58:21.825872 1466137 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1119 01:58:26.937388 1466137 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 5.110950457s
	I1119 01:58:27.827281 1466137 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.001622732s
	I1119 01:58:28.546119 1466137 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 6.719860441s
	I1119 01:58:28.584147 1466137 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1119 01:58:28.595199 1466137 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1119 01:58:28.608938 1466137 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1119 01:58:28.609151 1466137 kubeadm.go:319] [mark-control-plane] Marking the node addons-238225 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1119 01:58:28.621120 1466137 kubeadm.go:319] [bootstrap-token] Using token: qew20g.0239fhbjyet3v0oc
	I1119 01:58:28.624150 1466137 out.go:252]   - Configuring RBAC rules ...
	I1119 01:58:28.624278 1466137 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1119 01:58:28.628392 1466137 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1119 01:58:28.640715 1466137 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1119 01:58:28.644549 1466137 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1119 01:58:28.648548 1466137 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1119 01:58:28.652371 1466137 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1119 01:58:28.954233 1466137 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1119 01:58:29.388273 1466137 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1119 01:58:29.952893 1466137 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1119 01:58:29.954218 1466137 kubeadm.go:319] 
	I1119 01:58:29.954314 1466137 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1119 01:58:29.954323 1466137 kubeadm.go:319] 
	I1119 01:58:29.954405 1466137 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1119 01:58:29.954412 1466137 kubeadm.go:319] 
	I1119 01:58:29.954457 1466137 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1119 01:58:29.954535 1466137 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1119 01:58:29.954611 1466137 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1119 01:58:29.954625 1466137 kubeadm.go:319] 
	I1119 01:58:29.954688 1466137 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1119 01:58:29.954694 1466137 kubeadm.go:319] 
	I1119 01:58:29.954744 1466137 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1119 01:58:29.954749 1466137 kubeadm.go:319] 
	I1119 01:58:29.954803 1466137 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1119 01:58:29.954882 1466137 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1119 01:58:29.954958 1466137 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1119 01:58:29.954963 1466137 kubeadm.go:319] 
	I1119 01:58:29.955051 1466137 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1119 01:58:29.955131 1466137 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1119 01:58:29.955136 1466137 kubeadm.go:319] 
	I1119 01:58:29.955224 1466137 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token qew20g.0239fhbjyet3v0oc \
	I1119 01:58:29.955332 1466137 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:abb22cc8ae8e186956cff8cc7dabd6326c697e35c4ead85bcd3b5702cdc3f73a \
	I1119 01:58:29.955353 1466137 kubeadm.go:319] 	--control-plane 
	I1119 01:58:29.955358 1466137 kubeadm.go:319] 
	I1119 01:58:29.955447 1466137 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1119 01:58:29.955451 1466137 kubeadm.go:319] 
	I1119 01:58:29.955542 1466137 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token qew20g.0239fhbjyet3v0oc \
	I1119 01:58:29.955648 1466137 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:abb22cc8ae8e186956cff8cc7dabd6326c697e35c4ead85bcd3b5702cdc3f73a 
	I1119 01:58:29.958186 1466137 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1119 01:58:29.958419 1466137 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1119 01:58:29.958528 1466137 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1119 01:58:29.958543 1466137 cni.go:84] Creating CNI manager for ""
	I1119 01:58:29.958551 1466137 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 01:58:29.963511 1466137 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1119 01:58:29.966327 1466137 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1119 01:58:29.970402 1466137 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1119 01:58:29.970423 1466137 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1119 01:58:29.982309 1466137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1119 01:58:30.290424 1466137 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1119 01:58:30.290583 1466137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 01:58:30.290661 1466137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-238225 minikube.k8s.io/updated_at=2025_11_19T01_58_30_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ebd49efe8b7d44f89fc96e21beffcd64dfee8277 minikube.k8s.io/name=addons-238225 minikube.k8s.io/primary=true
	I1119 01:58:30.435848 1466137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 01:58:30.435909 1466137 ops.go:34] apiserver oom_adj: -16
	I1119 01:58:30.935974 1466137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 01:58:31.436347 1466137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 01:58:31.936147 1466137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 01:58:32.436830 1466137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 01:58:32.936309 1466137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 01:58:33.436774 1466137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 01:58:33.936570 1466137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 01:58:34.078063 1466137 kubeadm.go:1114] duration metric: took 3.787523418s to wait for elevateKubeSystemPrivileges
	I1119 01:58:34.078095 1466137 kubeadm.go:403] duration metric: took 19.74488099s to StartCluster
	I1119 01:58:34.078113 1466137 settings.go:142] acquiring lock: {Name:mk60840cd190596747bdc8f4ec6ab9c30048bf11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 01:58:34.078254 1466137 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21924-1463525/kubeconfig
	I1119 01:58:34.078661 1466137 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/kubeconfig: {Name:mk569dbb5fa76b01404eb61ebb04e9884665ea78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 01:58:34.078871 1466137 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 01:58:34.079035 1466137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1119 01:58:34.079339 1466137 config.go:182] Loaded profile config "addons-238225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 01:58:34.079385 1466137 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1119 01:58:34.079467 1466137 addons.go:70] Setting yakd=true in profile "addons-238225"
	I1119 01:58:34.079495 1466137 addons.go:239] Setting addon yakd=true in "addons-238225"
	I1119 01:58:34.079517 1466137 host.go:66] Checking if "addons-238225" exists ...
	I1119 01:58:34.080055 1466137 cli_runner.go:164] Run: docker container inspect addons-238225 --format={{.State.Status}}
	I1119 01:58:34.080558 1466137 addons.go:70] Setting metrics-server=true in profile "addons-238225"
	I1119 01:58:34.080583 1466137 addons.go:239] Setting addon metrics-server=true in "addons-238225"
	I1119 01:58:34.080617 1466137 host.go:66] Checking if "addons-238225" exists ...
	I1119 01:58:34.081049 1466137 cli_runner.go:164] Run: docker container inspect addons-238225 --format={{.State.Status}}
	I1119 01:58:34.083030 1466137 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-238225"
	I1119 01:58:34.083113 1466137 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-238225"
	I1119 01:58:34.083211 1466137 host.go:66] Checking if "addons-238225" exists ...
	I1119 01:58:34.083576 1466137 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-238225"
	I1119 01:58:34.083661 1466137 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-238225"
	I1119 01:58:34.083701 1466137 host.go:66] Checking if "addons-238225" exists ...
	I1119 01:58:34.085319 1466137 cli_runner.go:164] Run: docker container inspect addons-238225 --format={{.State.Status}}
	I1119 01:58:34.086913 1466137 addons.go:70] Setting registry=true in profile "addons-238225"
	I1119 01:58:34.088913 1466137 addons.go:239] Setting addon registry=true in "addons-238225"
	I1119 01:58:34.088950 1466137 host.go:66] Checking if "addons-238225" exists ...
	I1119 01:58:34.089420 1466137 cli_runner.go:164] Run: docker container inspect addons-238225 --format={{.State.Status}}
	I1119 01:58:34.083817 1466137 addons.go:70] Setting cloud-spanner=true in profile "addons-238225"
	I1119 01:58:34.093329 1466137 addons.go:239] Setting addon cloud-spanner=true in "addons-238225"
	I1119 01:58:34.093396 1466137 host.go:66] Checking if "addons-238225" exists ...
	I1119 01:58:34.094112 1466137 cli_runner.go:164] Run: docker container inspect addons-238225 --format={{.State.Status}}
	I1119 01:58:34.083826 1466137 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-238225"
	I1119 01:58:34.102090 1466137 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-238225"
	I1119 01:58:34.102136 1466137 host.go:66] Checking if "addons-238225" exists ...
	I1119 01:58:34.102615 1466137 cli_runner.go:164] Run: docker container inspect addons-238225 --format={{.State.Status}}
	I1119 01:58:34.088471 1466137 addons.go:70] Setting registry-creds=true in profile "addons-238225"
	I1119 01:58:34.103260 1466137 addons.go:239] Setting addon registry-creds=true in "addons-238225"
	I1119 01:58:34.083834 1466137 addons.go:70] Setting gcp-auth=true in profile "addons-238225"
	I1119 01:58:34.103304 1466137 mustload.go:66] Loading cluster: addons-238225
	I1119 01:58:34.083837 1466137 addons.go:70] Setting ingress=true in profile "addons-238225"
	I1119 01:58:34.103371 1466137 addons.go:239] Setting addon ingress=true in "addons-238225"
	I1119 01:58:34.103406 1466137 host.go:66] Checking if "addons-238225" exists ...
	I1119 01:58:34.103852 1466137 cli_runner.go:164] Run: docker container inspect addons-238225 --format={{.State.Status}}
	I1119 01:58:34.083831 1466137 addons.go:70] Setting default-storageclass=true in profile "addons-238225"
	I1119 01:58:34.127006 1466137 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-238225"
	I1119 01:58:34.128587 1466137 cli_runner.go:164] Run: docker container inspect addons-238225 --format={{.State.Status}}
	I1119 01:58:34.083844 1466137 addons.go:70] Setting ingress-dns=true in profile "addons-238225"
	I1119 01:58:34.137695 1466137 addons.go:239] Setting addon ingress-dns=true in "addons-238225"
	I1119 01:58:34.137746 1466137 host.go:66] Checking if "addons-238225" exists ...
	I1119 01:58:34.138209 1466137 cli_runner.go:164] Run: docker container inspect addons-238225 --format={{.State.Status}}
	I1119 01:58:34.083848 1466137 addons.go:70] Setting inspektor-gadget=true in profile "addons-238225"
	I1119 01:58:34.148258 1466137 addons.go:239] Setting addon inspektor-gadget=true in "addons-238225"
	I1119 01:58:34.148299 1466137 host.go:66] Checking if "addons-238225" exists ...
	I1119 01:58:34.148782 1466137 cli_runner.go:164] Run: docker container inspect addons-238225 --format={{.State.Status}}
	I1119 01:58:34.088485 1466137 addons.go:70] Setting storage-provisioner=true in profile "addons-238225"
	I1119 01:58:34.148986 1466137 addons.go:239] Setting addon storage-provisioner=true in "addons-238225"
	I1119 01:58:34.149011 1466137 host.go:66] Checking if "addons-238225" exists ...
	I1119 01:58:34.149892 1466137 cli_runner.go:164] Run: docker container inspect addons-238225 --format={{.State.Status}}
	I1119 01:58:34.088494 1466137 addons.go:70] Setting volcano=true in profile "addons-238225"
	I1119 01:58:34.161057 1466137 addons.go:239] Setting addon volcano=true in "addons-238225"
	I1119 01:58:34.161108 1466137 host.go:66] Checking if "addons-238225" exists ...
	I1119 01:58:34.161620 1466137 cli_runner.go:164] Run: docker container inspect addons-238225 --format={{.State.Status}}
	I1119 01:58:34.088490 1466137 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-238225"
	I1119 01:58:34.171292 1466137 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-238225"
	I1119 01:58:34.171649 1466137 cli_runner.go:164] Run: docker container inspect addons-238225 --format={{.State.Status}}
	I1119 01:58:34.088498 1466137 addons.go:70] Setting volumesnapshots=true in profile "addons-238225"
	I1119 01:58:34.175851 1466137 addons.go:239] Setting addon volumesnapshots=true in "addons-238225"
	I1119 01:58:34.175905 1466137 host.go:66] Checking if "addons-238225" exists ...
	I1119 01:58:34.176365 1466137 cli_runner.go:164] Run: docker container inspect addons-238225 --format={{.State.Status}}
	I1119 01:58:34.088885 1466137 cli_runner.go:164] Run: docker container inspect addons-238225 --format={{.State.Status}}
	I1119 01:58:34.088895 1466137 out.go:179] * Verifying Kubernetes components...
	I1119 01:58:34.203144 1466137 host.go:66] Checking if "addons-238225" exists ...
	I1119 01:58:34.203636 1466137 cli_runner.go:164] Run: docker container inspect addons-238225 --format={{.State.Status}}
	I1119 01:58:34.223591 1466137 config.go:182] Loaded profile config "addons-238225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 01:58:34.223891 1466137 cli_runner.go:164] Run: docker container inspect addons-238225 --format={{.State.Status}}
	I1119 01:58:34.271323 1466137 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 01:58:34.329536 1466137 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1119 01:58:34.346815 1466137 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1119 01:58:34.350717 1466137 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1119 01:58:34.350772 1466137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1119 01:58:34.350862 1466137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-238225
	I1119 01:58:34.356498 1466137 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1119 01:58:34.360344 1466137 out.go:179]   - Using image docker.io/registry:3.0.0
	I1119 01:58:34.367739 1466137 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1119 01:58:34.380775 1466137 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1119 01:58:34.386778 1466137 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1119 01:58:34.389817 1466137 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1119 01:58:34.392929 1466137 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1119 01:58:34.395920 1466137 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1119 01:58:34.396256 1466137 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1119 01:58:34.396277 1466137 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1119 01:58:34.396346 1466137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-238225
	W1119 01:58:34.403532 1466137 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1119 01:58:34.406921 1466137 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1119 01:58:34.407120 1466137 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1119 01:58:34.407133 1466137 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1119 01:58:34.407205 1466137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-238225
	I1119 01:58:34.421641 1466137 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1119 01:58:34.421721 1466137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1119 01:58:34.421827 1466137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-238225
	I1119 01:58:34.429862 1466137 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 01:58:34.433258 1466137 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-238225"
	I1119 01:58:34.433298 1466137 host.go:66] Checking if "addons-238225" exists ...
	I1119 01:58:34.438025 1466137 cli_runner.go:164] Run: docker container inspect addons-238225 --format={{.State.Status}}
	I1119 01:58:34.451420 1466137 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1119 01:58:34.451562 1466137 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1119 01:58:34.456323 1466137 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1119 01:58:34.458290 1466137 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1119 01:58:34.460418 1466137 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1119 01:58:34.460441 1466137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1119 01:58:34.460510 1466137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-238225
	I1119 01:58:34.466734 1466137 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1119 01:58:34.466761 1466137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1119 01:58:34.481968 1466137 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.43
	I1119 01:58:34.486911 1466137 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1119 01:58:34.486935 1466137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1119 01:58:34.487009 1466137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-238225
	I1119 01:58:34.491885 1466137 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 01:58:34.491905 1466137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 01:58:34.491974 1466137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-238225
	I1119 01:58:34.509711 1466137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-238225
	I1119 01:58:34.521944 1466137 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1119 01:58:34.522361 1466137 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1119 01:58:34.553580 1466137 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1119 01:58:34.557233 1466137 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1119 01:58:34.560143 1466137 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1119 01:58:34.560174 1466137 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1119 01:58:34.560273 1466137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-238225
	I1119 01:58:34.565718 1466137 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1119 01:58:34.574700 1466137 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1119 01:58:34.574788 1466137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1119 01:58:34.574947 1466137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-238225
	I1119 01:58:34.595776 1466137 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1119 01:58:34.595800 1466137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1119 01:58:34.595883 1466137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-238225
	I1119 01:58:34.610041 1466137 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1119 01:58:34.610070 1466137 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1119 01:58:34.610156 1466137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-238225
	I1119 01:58:34.610556 1466137 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34614 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/addons-238225/id_rsa Username:docker}
	I1119 01:58:34.619430 1466137 addons.go:239] Setting addon default-storageclass=true in "addons-238225"
	I1119 01:58:34.619477 1466137 host.go:66] Checking if "addons-238225" exists ...
	I1119 01:58:34.619913 1466137 cli_runner.go:164] Run: docker container inspect addons-238225 --format={{.State.Status}}
	I1119 01:58:34.621459 1466137 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34614 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/addons-238225/id_rsa Username:docker}
	I1119 01:58:34.641846 1466137 host.go:66] Checking if "addons-238225" exists ...
	I1119 01:58:34.660229 1466137 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1119 01:58:34.660283 1466137 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34614 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/addons-238225/id_rsa Username:docker}
	I1119 01:58:34.661346 1466137 out.go:179]   - Using image docker.io/busybox:stable
	I1119 01:58:34.693638 1466137 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1119 01:58:34.673856 1466137 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34614 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/addons-238225/id_rsa Username:docker}
	I1119 01:58:34.698830 1466137 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34614 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/addons-238225/id_rsa Username:docker}
	I1119 01:58:34.674449 1466137 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1119 01:58:34.699649 1466137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1119 01:58:34.699838 1466137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-238225
	I1119 01:58:34.701619 1466137 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34614 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/addons-238225/id_rsa Username:docker}
	I1119 01:58:34.702063 1466137 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1119 01:58:34.702074 1466137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1119 01:58:34.702122 1466137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-238225
	I1119 01:58:34.726930 1466137 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34614 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/addons-238225/id_rsa Username:docker}
	I1119 01:58:34.735551 1466137 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34614 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/addons-238225/id_rsa Username:docker}
	I1119 01:58:34.745439 1466137 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34614 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/addons-238225/id_rsa Username:docker}
	I1119 01:58:34.773902 1466137 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34614 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/addons-238225/id_rsa Username:docker}
	I1119 01:58:34.778758 1466137 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34614 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/addons-238225/id_rsa Username:docker}
	I1119 01:58:34.797978 1466137 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 01:58:34.797998 1466137 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 01:58:34.798058 1466137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-238225
	I1119 01:58:34.799941 1466137 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34614 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/addons-238225/id_rsa Username:docker}
	I1119 01:58:34.837794 1466137 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34614 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/addons-238225/id_rsa Username:docker}
	I1119 01:58:34.844430 1466137 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34614 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/addons-238225/id_rsa Username:docker}
	I1119 01:58:34.847039 1466137 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34614 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/addons-238225/id_rsa Username:docker}
	I1119 01:58:34.902520 1466137 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 01:58:34.902721 1466137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1119 01:58:35.204139 1466137 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1119 01:58:35.204212 1466137 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1119 01:58:35.282417 1466137 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1119 01:58:35.282490 1466137 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1119 01:58:35.287825 1466137 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1119 01:58:35.287847 1466137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1119 01:58:35.307204 1466137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1119 01:58:35.331503 1466137 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1119 01:58:35.331574 1466137 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1119 01:58:35.338465 1466137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1119 01:58:35.340712 1466137 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1119 01:58:35.340776 1466137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1119 01:58:35.394454 1466137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1119 01:58:35.402303 1466137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1119 01:58:35.409662 1466137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1119 01:58:35.424396 1466137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 01:58:35.432458 1466137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1119 01:58:35.434260 1466137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1119 01:58:35.434934 1466137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1119 01:58:35.469164 1466137 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1119 01:58:35.469237 1466137 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1119 01:58:35.507232 1466137 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1119 01:58:35.507309 1466137 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1119 01:58:35.515372 1466137 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1119 01:58:35.515452 1466137 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1119 01:58:35.519313 1466137 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1119 01:58:35.519388 1466137 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1119 01:58:35.520176 1466137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1119 01:58:35.567831 1466137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 01:58:35.601141 1466137 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1119 01:58:35.601215 1466137 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1119 01:58:35.663223 1466137 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1119 01:58:35.663292 1466137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1119 01:58:35.663537 1466137 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1119 01:58:35.663570 1466137 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1119 01:58:35.708764 1466137 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1119 01:58:35.708845 1466137 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1119 01:58:35.859058 1466137 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1119 01:58:35.859132 1466137 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1119 01:58:35.865400 1466137 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1119 01:58:35.865472 1466137 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1119 01:58:35.882002 1466137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1119 01:58:35.952711 1466137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1119 01:58:36.001653 1466137 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1119 01:58:36.001728 1466137 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1119 01:58:36.010332 1466137 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1119 01:58:36.010417 1466137 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1119 01:58:36.193099 1466137 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1119 01:58:36.193172 1466137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1119 01:58:36.238272 1466137 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1119 01:58:36.238350 1466137 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1119 01:58:36.449743 1466137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1119 01:58:36.514613 1466137 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1119 01:58:36.514688 1466137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1119 01:58:36.562006 1466137 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.659251096s)
	I1119 01:58:36.562088 1466137 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1119 01:58:36.562825 1466137 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.660279703s)
	I1119 01:58:36.564115 1466137 node_ready.go:35] waiting up to 6m0s for node "addons-238225" to be "Ready" ...
	I1119 01:58:36.858453 1466137 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1119 01:58:36.858525 1466137 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1119 01:58:37.075321 1466137 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-238225" context rescaled to 1 replicas
	I1119 01:58:37.158010 1466137 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1119 01:58:37.158079 1466137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1119 01:58:37.301276 1466137 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1119 01:58:37.301352 1466137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1119 01:58:37.416798 1466137 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1119 01:58:37.416876 1466137 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1119 01:58:37.667758 1466137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1119 01:58:38.607366 1466137 node_ready.go:57] node "addons-238225" has "Ready":"False" status (will retry)
	I1119 01:58:39.223018 1466137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (3.884468845s)
	I1119 01:58:39.223157 1466137 addons.go:480] Verifying addon registry=true in "addons-238225"
	I1119 01:58:39.223068 1466137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.828540854s)
	I1119 01:58:39.223272 1466137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (3.916046933s)
	I1119 01:58:39.228275 1466137 out.go:179] * Verifying registry addon...
	I1119 01:58:39.232025 1466137 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1119 01:58:39.262709 1466137 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1119 01:58:39.262785 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:39.746504 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:40.220814 1466137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.818436304s)
	I1119 01:58:40.220897 1466137 addons.go:480] Verifying addon ingress=true in "addons-238225"
	I1119 01:58:40.221108 1466137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.81138061s)
	I1119 01:58:40.221253 1466137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.796795426s)
	I1119 01:58:40.221275 1466137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.788750544s)
	I1119 01:58:40.221291 1466137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.786972985s)
	I1119 01:58:40.221322 1466137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (4.786337245s)
	I1119 01:58:40.221371 1466137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.701151224s)
	I1119 01:58:40.221414 1466137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.653525773s)
	I1119 01:58:40.221442 1466137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.339379477s)
	I1119 01:58:40.221489 1466137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.268711484s)
	I1119 01:58:40.222239 1466137 addons.go:480] Verifying addon metrics-server=true in "addons-238225"
	I1119 01:58:40.221594 1466137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.771780024s)
	W1119 01:58:40.222266 1466137 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1119 01:58:40.222298 1466137 retry.go:31] will retry after 282.047195ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1119 01:58:40.224724 1466137 out.go:179] * Verifying ingress addon...
	I1119 01:58:40.226624 1466137 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-238225 service yakd-dashboard -n yakd-dashboard
	
	I1119 01:58:40.230425 1466137 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1119 01:58:40.243725 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:40.243987 1466137 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1119 01:58:40.243996 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 01:58:40.257296 1466137 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1119 01:58:40.504629 1466137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1119 01:58:40.538351 1466137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.87047648s)
	I1119 01:58:40.538383 1466137 addons.go:480] Verifying addon csi-hostpath-driver=true in "addons-238225"
	I1119 01:58:40.541410 1466137 out.go:179] * Verifying csi-hostpath-driver addon...
	I1119 01:58:40.544840 1466137 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1119 01:58:40.558083 1466137 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1119 01:58:40.558108 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:40.737470 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:40.738014 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:41.049195 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1119 01:58:41.067868 1466137 node_ready.go:57] node "addons-238225" has "Ready":"False" status (will retry)
	I1119 01:58:41.234369 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:41.235667 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:41.548649 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:41.736651 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:41.737205 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:42.051737 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:42.239961 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:42.242447 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:42.257756 1466137 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1119 01:58:42.257922 1466137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-238225
	I1119 01:58:42.282737 1466137 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34614 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/addons-238225/id_rsa Username:docker}
	I1119 01:58:42.399500 1466137 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1119 01:58:42.413385 1466137 addons.go:239] Setting addon gcp-auth=true in "addons-238225"
	I1119 01:58:42.413435 1466137 host.go:66] Checking if "addons-238225" exists ...
	I1119 01:58:42.413911 1466137 cli_runner.go:164] Run: docker container inspect addons-238225 --format={{.State.Status}}
	I1119 01:58:42.431189 1466137 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1119 01:58:42.431245 1466137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-238225
	I1119 01:58:42.448514 1466137 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34614 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/addons-238225/id_rsa Username:docker}
	I1119 01:58:42.547894 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:42.733289 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:42.735080 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:43.048323 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:43.195257 1466137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.690534368s)
	I1119 01:58:43.198395 1466137 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1119 01:58:43.201248 1466137 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1119 01:58:43.203961 1466137 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1119 01:58:43.203979 1466137 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1119 01:58:43.216753 1466137 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1119 01:58:43.216775 1466137 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1119 01:58:43.229623 1466137 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1119 01:58:43.229643 1466137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1119 01:58:43.234277 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:43.236058 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:43.248794 1466137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1119 01:58:43.548775 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1119 01:58:43.568113 1466137 node_ready.go:57] node "addons-238225" has "Ready":"False" status (will retry)
	I1119 01:58:43.719633 1466137 addons.go:480] Verifying addon gcp-auth=true in "addons-238225"
	I1119 01:58:43.722936 1466137 out.go:179] * Verifying gcp-auth addon...
	I1119 01:58:43.727345 1466137 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1119 01:58:43.731830 1466137 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1119 01:58:43.731896 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:43.734374 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:43.734933 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:44.048339 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:44.230892 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:44.232823 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:44.234956 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:44.548159 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:44.731057 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:44.733259 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:44.735280 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:45.049487 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:45.231088 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:45.234677 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:45.236687 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:45.548161 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:45.730900 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:45.733251 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:45.734960 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:46.048405 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1119 01:58:46.067060 1466137 node_ready.go:57] node "addons-238225" has "Ready":"False" status (will retry)
	I1119 01:58:46.231411 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:46.233454 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:46.234945 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:46.548133 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:46.731456 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:46.734984 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:46.735088 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:47.048228 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:47.231300 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:47.233792 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:47.235187 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:47.548446 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:47.730207 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:47.733810 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:47.734441 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:48.048890 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1119 01:58:48.068535 1466137 node_ready.go:57] node "addons-238225" has "Ready":"False" status (will retry)
	I1119 01:58:48.230526 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:48.232738 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:48.235031 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:48.548650 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:48.730271 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:48.733816 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:48.735160 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:49.048061 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:49.230633 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:49.232796 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:49.234902 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:49.548325 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:49.730617 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:49.732862 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:49.734390 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:50.048219 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:50.231312 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:50.234142 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:50.235068 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:50.547976 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1119 01:58:50.567641 1466137 node_ready.go:57] node "addons-238225" has "Ready":"False" status (will retry)
	I1119 01:58:50.730568 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:50.733130 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:50.734707 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:51.047783 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:51.230219 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:51.234233 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:51.234832 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:51.548843 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:51.730356 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:51.733790 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:51.734992 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:52.047881 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:52.230642 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:52.233095 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:52.234785 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:52.547736 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:52.730295 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:52.734175 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:52.734570 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:53.048573 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1119 01:58:53.067127 1466137 node_ready.go:57] node "addons-238225" has "Ready":"False" status (will retry)
	I1119 01:58:53.230994 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:53.233273 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:53.234149 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:53.548292 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:53.730446 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:53.734661 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:53.735142 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:54.048597 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:54.230577 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:54.234186 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:54.235160 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:54.548279 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:54.730438 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:54.734730 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:54.735719 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:55.047930 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1119 01:58:55.067728 1466137 node_ready.go:57] node "addons-238225" has "Ready":"False" status (will retry)
	I1119 01:58:55.230962 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:55.232707 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:55.234236 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:55.548618 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:55.730294 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:55.734060 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:55.734923 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:56.047724 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:56.230372 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:56.234786 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:56.235107 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:56.548107 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:56.730801 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:56.733080 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:56.735317 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:57.048721 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:57.230241 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:57.233843 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:57.234949 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:57.548593 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1119 01:58:57.567406 1466137 node_ready.go:57] node "addons-238225" has "Ready":"False" status (will retry)
	I1119 01:58:57.730287 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:57.733897 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:57.735202 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:58.048111 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:58.244038 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:58.244540 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:58.244820 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:58.547881 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:58.730861 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:58.733842 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:58.734590 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:59.048455 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:59.230046 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:59.233389 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:59.235414 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:59.548532 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:59.730294 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:59.734717 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:59.734896 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:00.110905 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1119 01:59:00.111198 1466137 node_ready.go:57] node "addons-238225" has "Ready":"False" status (will retry)
	I1119 01:59:00.231462 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:00.272599 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:00.273386 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:00.548718 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:00.730789 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:00.732992 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:00.734710 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:01.047703 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:01.231602 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:01.234714 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:01.237439 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:01.550469 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:01.730685 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:01.733188 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:01.735355 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:02.048498 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:02.230396 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:02.234896 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:02.235270 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:02.548351 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1119 01:59:02.567109 1466137 node_ready.go:57] node "addons-238225" has "Ready":"False" status (will retry)
	I1119 01:59:02.730963 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:02.734141 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:02.734450 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:03.049191 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:03.231023 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:03.233359 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:03.234876 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:03.548331 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:03.732016 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:03.733547 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:03.734364 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:04.048473 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:04.230947 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:04.233255 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:04.234929 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:04.548050 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1119 01:59:04.568001 1466137 node_ready.go:57] node "addons-238225" has "Ready":"False" status (will retry)
	I1119 01:59:04.731247 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:04.734450 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:04.734511 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:05.047848 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:05.230532 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:05.233113 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:05.234802 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:05.547896 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:05.730544 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:05.733439 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:05.738866 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:06.047914 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:06.230669 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:06.233145 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:06.235097 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:06.547921 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:06.730683 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:06.733131 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:06.734943 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:07.048071 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1119 01:59:07.066891 1466137 node_ready.go:57] node "addons-238225" has "Ready":"False" status (will retry)
	I1119 01:59:07.230495 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:07.232732 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:07.234261 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:07.548498 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:07.730416 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:07.732998 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:07.734852 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:08.048434 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:08.231034 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:08.233251 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:08.235248 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:08.548576 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:08.730972 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:08.732747 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:08.734510 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:09.048891 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1119 01:59:09.067908 1466137 node_ready.go:57] node "addons-238225" has "Ready":"False" status (will retry)
	I1119 01:59:09.230565 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:09.233069 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:09.234812 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:09.548113 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:09.730474 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:09.733782 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:09.734647 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:10.050566 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:10.230365 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:10.233478 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:10.235064 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:10.548330 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:10.730408 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:10.733986 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:10.735066 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:11.048066 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:11.231130 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:11.233168 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:11.234700 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:11.547857 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1119 01:59:11.567617 1466137 node_ready.go:57] node "addons-238225" has "Ready":"False" status (will retry)
	I1119 01:59:11.730491 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:11.732827 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:11.734655 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:12.047593 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:12.230416 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:12.234907 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:12.235848 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:12.548008 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:12.730539 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:12.732852 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:12.734622 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:13.047884 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:13.230459 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:13.232774 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:13.234376 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:13.548549 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:13.729980 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:13.733415 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:13.735110 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:14.047822 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1119 01:59:14.068009 1466137 node_ready.go:57] node "addons-238225" has "Ready":"False" status (will retry)
	I1119 01:59:14.230845 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:14.233028 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:14.234552 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:14.548387 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:14.731170 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:14.733331 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:14.734848 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:15.048079 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:15.290603 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:15.301057 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:15.302351 1466137 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1119 01:59:15.302410 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:15.609854 1466137 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1119 01:59:15.609940 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:15.613469 1466137 node_ready.go:49] node "addons-238225" is "Ready"
	I1119 01:59:15.613578 1466137 node_ready.go:38] duration metric: took 39.049396885s for node "addons-238225" to be "Ready" ...
	I1119 01:59:15.613627 1466137 api_server.go:52] waiting for apiserver process to appear ...
	I1119 01:59:15.613739 1466137 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 01:59:15.634647 1466137 api_server.go:72] duration metric: took 41.555720057s to wait for apiserver process to appear ...
	I1119 01:59:15.634722 1466137 api_server.go:88] waiting for apiserver healthz status ...
	I1119 01:59:15.634755 1466137 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1119 01:59:15.647926 1466137 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1119 01:59:15.649282 1466137 api_server.go:141] control plane version: v1.34.1
	I1119 01:59:15.649348 1466137 api_server.go:131] duration metric: took 14.602732ms to wait for apiserver health ...
	I1119 01:59:15.649370 1466137 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 01:59:15.653618 1466137 system_pods.go:59] 19 kube-system pods found
	I1119 01:59:15.653706 1466137 system_pods.go:61] "coredns-66bc5c9577-xmb7d" [005da4cd-c065-43b0-a68c-567b3aa0e823] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 01:59:15.653741 1466137 system_pods.go:61] "csi-hostpath-attacher-0" [25628434-04b4-4ee8-8e20-113140869edd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1119 01:59:15.653761 1466137 system_pods.go:61] "csi-hostpath-resizer-0" [63574fc5-5fe1-4ed1-b452-26f4c4a8dba4] Pending
	I1119 01:59:15.653790 1466137 system_pods.go:61] "csi-hostpathplugin-rfpfq" [3b2a8c8d-41b1-4a04-b4a6-8200b7915ccf] Pending
	I1119 01:59:15.653826 1466137 system_pods.go:61] "etcd-addons-238225" [1ca33989-ddee-4b6c-84c9-0a02d2edc2f4] Running
	I1119 01:59:15.653858 1466137 system_pods.go:61] "kindnet-8wgcz" [461d42c1-3fe2-4a61-bece-95eede038f6e] Running
	I1119 01:59:15.653878 1466137 system_pods.go:61] "kube-apiserver-addons-238225" [9474f34f-a012-4109-9d35-907ab113f885] Running
	I1119 01:59:15.653909 1466137 system_pods.go:61] "kube-controller-manager-addons-238225" [e38b8531-6894-4942-8cf8-ee082fede3fe] Running
	I1119 01:59:15.653931 1466137 system_pods.go:61] "kube-ingress-dns-minikube" [7e42e918-7ccd-4d6a-905a-be5916f26ea5] Pending
	I1119 01:59:15.653967 1466137 system_pods.go:61] "kube-proxy-6dppw" [d8300433-a767-4a0d-a70d-d64b75617671] Running
	I1119 01:59:15.654000 1466137 system_pods.go:61] "kube-scheduler-addons-238225" [37f9b2a2-0d47-4390-82b8-d58af8c0e3fd] Running
	I1119 01:59:15.654020 1466137 system_pods.go:61] "metrics-server-85b7d694d7-wjr8r" [c1645465-d21f-488d-b849-db3aca1a5ba3] Pending
	I1119 01:59:15.654038 1466137 system_pods.go:61] "nvidia-device-plugin-daemonset-fb27k" [403cf382-cc51-4315-b678-7c3168a8179a] Pending
	I1119 01:59:15.654093 1466137 system_pods.go:61] "registry-6b586f9694-2n7m4" [bda65628-e7f7-4672-860f-daef7b6a78b9] Pending
	I1119 01:59:15.654125 1466137 system_pods.go:61] "registry-creds-764b6fb674-6dd8r" [ca3fc492-0471-4cd7-8f7f-c6c9a672b1c7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1119 01:59:15.654166 1466137 system_pods.go:61] "registry-proxy-7m7l6" [7c2e134e-77c6-414b-9341-2e7db32808cd] Pending
	I1119 01:59:15.654196 1466137 system_pods.go:61] "snapshot-controller-7d9fbc56b8-5fsqs" [1f5e980a-d8cb-48c7-9838-77769445e689] Pending
	I1119 01:59:15.654216 1466137 system_pods.go:61] "snapshot-controller-7d9fbc56b8-x5sfx" [b9e2955c-41d4-4abd-b64a-a8bcb9ba52ef] Pending
	I1119 01:59:15.654247 1466137 system_pods.go:61] "storage-provisioner" [e5590246-55a0-4bdc-87e4-844adc590229] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 01:59:15.654267 1466137 system_pods.go:74] duration metric: took 4.872396ms to wait for pod list to return data ...
	I1119 01:59:15.654306 1466137 default_sa.go:34] waiting for default service account to be created ...
	I1119 01:59:15.663108 1466137 default_sa.go:45] found service account: "default"
	I1119 01:59:15.663175 1466137 default_sa.go:55] duration metric: took 8.83056ms for default service account to be created ...
	I1119 01:59:15.663200 1466137 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 01:59:15.671074 1466137 system_pods.go:86] 19 kube-system pods found
	I1119 01:59:15.671112 1466137 system_pods.go:89] "coredns-66bc5c9577-xmb7d" [005da4cd-c065-43b0-a68c-567b3aa0e823] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 01:59:15.671123 1466137 system_pods.go:89] "csi-hostpath-attacher-0" [25628434-04b4-4ee8-8e20-113140869edd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1119 01:59:15.671137 1466137 system_pods.go:89] "csi-hostpath-resizer-0" [63574fc5-5fe1-4ed1-b452-26f4c4a8dba4] Pending
	I1119 01:59:15.671142 1466137 system_pods.go:89] "csi-hostpathplugin-rfpfq" [3b2a8c8d-41b1-4a04-b4a6-8200b7915ccf] Pending
	I1119 01:59:15.671146 1466137 system_pods.go:89] "etcd-addons-238225" [1ca33989-ddee-4b6c-84c9-0a02d2edc2f4] Running
	I1119 01:59:15.671152 1466137 system_pods.go:89] "kindnet-8wgcz" [461d42c1-3fe2-4a61-bece-95eede038f6e] Running
	I1119 01:59:15.671168 1466137 system_pods.go:89] "kube-apiserver-addons-238225" [9474f34f-a012-4109-9d35-907ab113f885] Running
	I1119 01:59:15.671173 1466137 system_pods.go:89] "kube-controller-manager-addons-238225" [e38b8531-6894-4942-8cf8-ee082fede3fe] Running
	I1119 01:59:15.671177 1466137 system_pods.go:89] "kube-ingress-dns-minikube" [7e42e918-7ccd-4d6a-905a-be5916f26ea5] Pending
	I1119 01:59:15.671190 1466137 system_pods.go:89] "kube-proxy-6dppw" [d8300433-a767-4a0d-a70d-d64b75617671] Running
	I1119 01:59:15.671194 1466137 system_pods.go:89] "kube-scheduler-addons-238225" [37f9b2a2-0d47-4390-82b8-d58af8c0e3fd] Running
	I1119 01:59:15.671199 1466137 system_pods.go:89] "metrics-server-85b7d694d7-wjr8r" [c1645465-d21f-488d-b849-db3aca1a5ba3] Pending
	I1119 01:59:15.671210 1466137 system_pods.go:89] "nvidia-device-plugin-daemonset-fb27k" [403cf382-cc51-4315-b678-7c3168a8179a] Pending
	I1119 01:59:15.671218 1466137 system_pods.go:89] "registry-6b586f9694-2n7m4" [bda65628-e7f7-4672-860f-daef7b6a78b9] Pending
	I1119 01:59:15.671225 1466137 system_pods.go:89] "registry-creds-764b6fb674-6dd8r" [ca3fc492-0471-4cd7-8f7f-c6c9a672b1c7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1119 01:59:15.671229 1466137 system_pods.go:89] "registry-proxy-7m7l6" [7c2e134e-77c6-414b-9341-2e7db32808cd] Pending
	I1119 01:59:15.671240 1466137 system_pods.go:89] "snapshot-controller-7d9fbc56b8-5fsqs" [1f5e980a-d8cb-48c7-9838-77769445e689] Pending
	I1119 01:59:15.671249 1466137 system_pods.go:89] "snapshot-controller-7d9fbc56b8-x5sfx" [b9e2955c-41d4-4abd-b64a-a8bcb9ba52ef] Pending
	I1119 01:59:15.671255 1466137 system_pods.go:89] "storage-provisioner" [e5590246-55a0-4bdc-87e4-844adc590229] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 01:59:15.671282 1466137 retry.go:31] will retry after 221.734559ms: missing components: kube-dns
	I1119 01:59:15.742414 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:15.743664 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:15.744761 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:15.909429 1466137 system_pods.go:86] 19 kube-system pods found
	I1119 01:59:15.909471 1466137 system_pods.go:89] "coredns-66bc5c9577-xmb7d" [005da4cd-c065-43b0-a68c-567b3aa0e823] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 01:59:15.909481 1466137 system_pods.go:89] "csi-hostpath-attacher-0" [25628434-04b4-4ee8-8e20-113140869edd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1119 01:59:15.909489 1466137 system_pods.go:89] "csi-hostpath-resizer-0" [63574fc5-5fe1-4ed1-b452-26f4c4a8dba4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1119 01:59:15.909496 1466137 system_pods.go:89] "csi-hostpathplugin-rfpfq" [3b2a8c8d-41b1-4a04-b4a6-8200b7915ccf] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1119 01:59:15.909501 1466137 system_pods.go:89] "etcd-addons-238225" [1ca33989-ddee-4b6c-84c9-0a02d2edc2f4] Running
	I1119 01:59:15.909544 1466137 system_pods.go:89] "kindnet-8wgcz" [461d42c1-3fe2-4a61-bece-95eede038f6e] Running
	I1119 01:59:15.909549 1466137 system_pods.go:89] "kube-apiserver-addons-238225" [9474f34f-a012-4109-9d35-907ab113f885] Running
	I1119 01:59:15.909556 1466137 system_pods.go:89] "kube-controller-manager-addons-238225" [e38b8531-6894-4942-8cf8-ee082fede3fe] Running
	I1119 01:59:15.909562 1466137 system_pods.go:89] "kube-ingress-dns-minikube" [7e42e918-7ccd-4d6a-905a-be5916f26ea5] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1119 01:59:15.909567 1466137 system_pods.go:89] "kube-proxy-6dppw" [d8300433-a767-4a0d-a70d-d64b75617671] Running
	I1119 01:59:15.909571 1466137 system_pods.go:89] "kube-scheduler-addons-238225" [37f9b2a2-0d47-4390-82b8-d58af8c0e3fd] Running
	I1119 01:59:15.909576 1466137 system_pods.go:89] "metrics-server-85b7d694d7-wjr8r" [c1645465-d21f-488d-b849-db3aca1a5ba3] Pending
	I1119 01:59:15.909580 1466137 system_pods.go:89] "nvidia-device-plugin-daemonset-fb27k" [403cf382-cc51-4315-b678-7c3168a8179a] Pending
	I1119 01:59:15.909584 1466137 system_pods.go:89] "registry-6b586f9694-2n7m4" [bda65628-e7f7-4672-860f-daef7b6a78b9] Pending
	I1119 01:59:15.909590 1466137 system_pods.go:89] "registry-creds-764b6fb674-6dd8r" [ca3fc492-0471-4cd7-8f7f-c6c9a672b1c7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1119 01:59:15.909594 1466137 system_pods.go:89] "registry-proxy-7m7l6" [7c2e134e-77c6-414b-9341-2e7db32808cd] Pending
	I1119 01:59:15.909602 1466137 system_pods.go:89] "snapshot-controller-7d9fbc56b8-5fsqs" [1f5e980a-d8cb-48c7-9838-77769445e689] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1119 01:59:15.909609 1466137 system_pods.go:89] "snapshot-controller-7d9fbc56b8-x5sfx" [b9e2955c-41d4-4abd-b64a-a8bcb9ba52ef] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1119 01:59:15.909623 1466137 system_pods.go:89] "storage-provisioner" [e5590246-55a0-4bdc-87e4-844adc590229] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 01:59:15.909641 1466137 retry.go:31] will retry after 272.31622ms: missing components: kube-dns
	I1119 01:59:16.051352 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:16.189098 1466137 system_pods.go:86] 19 kube-system pods found
	I1119 01:59:16.189136 1466137 system_pods.go:89] "coredns-66bc5c9577-xmb7d" [005da4cd-c065-43b0-a68c-567b3aa0e823] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 01:59:16.189147 1466137 system_pods.go:89] "csi-hostpath-attacher-0" [25628434-04b4-4ee8-8e20-113140869edd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1119 01:59:16.189157 1466137 system_pods.go:89] "csi-hostpath-resizer-0" [63574fc5-5fe1-4ed1-b452-26f4c4a8dba4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1119 01:59:16.189164 1466137 system_pods.go:89] "csi-hostpathplugin-rfpfq" [3b2a8c8d-41b1-4a04-b4a6-8200b7915ccf] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1119 01:59:16.189169 1466137 system_pods.go:89] "etcd-addons-238225" [1ca33989-ddee-4b6c-84c9-0a02d2edc2f4] Running
	I1119 01:59:16.189174 1466137 system_pods.go:89] "kindnet-8wgcz" [461d42c1-3fe2-4a61-bece-95eede038f6e] Running
	I1119 01:59:16.189179 1466137 system_pods.go:89] "kube-apiserver-addons-238225" [9474f34f-a012-4109-9d35-907ab113f885] Running
	I1119 01:59:16.189183 1466137 system_pods.go:89] "kube-controller-manager-addons-238225" [e38b8531-6894-4942-8cf8-ee082fede3fe] Running
	I1119 01:59:16.189199 1466137 system_pods.go:89] "kube-ingress-dns-minikube" [7e42e918-7ccd-4d6a-905a-be5916f26ea5] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1119 01:59:16.189203 1466137 system_pods.go:89] "kube-proxy-6dppw" [d8300433-a767-4a0d-a70d-d64b75617671] Running
	I1119 01:59:16.189208 1466137 system_pods.go:89] "kube-scheduler-addons-238225" [37f9b2a2-0d47-4390-82b8-d58af8c0e3fd] Running
	I1119 01:59:16.189222 1466137 system_pods.go:89] "metrics-server-85b7d694d7-wjr8r" [c1645465-d21f-488d-b849-db3aca1a5ba3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1119 01:59:16.189230 1466137 system_pods.go:89] "nvidia-device-plugin-daemonset-fb27k" [403cf382-cc51-4315-b678-7c3168a8179a] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1119 01:59:16.189240 1466137 system_pods.go:89] "registry-6b586f9694-2n7m4" [bda65628-e7f7-4672-860f-daef7b6a78b9] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1119 01:59:16.189246 1466137 system_pods.go:89] "registry-creds-764b6fb674-6dd8r" [ca3fc492-0471-4cd7-8f7f-c6c9a672b1c7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1119 01:59:16.189254 1466137 system_pods.go:89] "registry-proxy-7m7l6" [7c2e134e-77c6-414b-9341-2e7db32808cd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1119 01:59:16.189263 1466137 system_pods.go:89] "snapshot-controller-7d9fbc56b8-5fsqs" [1f5e980a-d8cb-48c7-9838-77769445e689] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1119 01:59:16.189271 1466137 system_pods.go:89] "snapshot-controller-7d9fbc56b8-x5sfx" [b9e2955c-41d4-4abd-b64a-a8bcb9ba52ef] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1119 01:59:16.189277 1466137 system_pods.go:89] "storage-provisioner" [e5590246-55a0-4bdc-87e4-844adc590229] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 01:59:16.189293 1466137 retry.go:31] will retry after 366.562895ms: missing components: kube-dns
	I1119 01:59:16.230212 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:16.234426 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:16.238021 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:16.549021 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:16.567805 1466137 system_pods.go:86] 19 kube-system pods found
	I1119 01:59:16.567842 1466137 system_pods.go:89] "coredns-66bc5c9577-xmb7d" [005da4cd-c065-43b0-a68c-567b3aa0e823] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 01:59:16.567852 1466137 system_pods.go:89] "csi-hostpath-attacher-0" [25628434-04b4-4ee8-8e20-113140869edd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1119 01:59:16.567890 1466137 system_pods.go:89] "csi-hostpath-resizer-0" [63574fc5-5fe1-4ed1-b452-26f4c4a8dba4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1119 01:59:16.567919 1466137 system_pods.go:89] "csi-hostpathplugin-rfpfq" [3b2a8c8d-41b1-4a04-b4a6-8200b7915ccf] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1119 01:59:16.567929 1466137 system_pods.go:89] "etcd-addons-238225" [1ca33989-ddee-4b6c-84c9-0a02d2edc2f4] Running
	I1119 01:59:16.567936 1466137 system_pods.go:89] "kindnet-8wgcz" [461d42c1-3fe2-4a61-bece-95eede038f6e] Running
	I1119 01:59:16.567940 1466137 system_pods.go:89] "kube-apiserver-addons-238225" [9474f34f-a012-4109-9d35-907ab113f885] Running
	I1119 01:59:16.568020 1466137 system_pods.go:89] "kube-controller-manager-addons-238225" [e38b8531-6894-4942-8cf8-ee082fede3fe] Running
	I1119 01:59:16.568080 1466137 system_pods.go:89] "kube-ingress-dns-minikube" [7e42e918-7ccd-4d6a-905a-be5916f26ea5] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1119 01:59:16.568093 1466137 system_pods.go:89] "kube-proxy-6dppw" [d8300433-a767-4a0d-a70d-d64b75617671] Running
	I1119 01:59:16.568114 1466137 system_pods.go:89] "kube-scheduler-addons-238225" [37f9b2a2-0d47-4390-82b8-d58af8c0e3fd] Running
	I1119 01:59:16.568129 1466137 system_pods.go:89] "metrics-server-85b7d694d7-wjr8r" [c1645465-d21f-488d-b849-db3aca1a5ba3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1119 01:59:16.568136 1466137 system_pods.go:89] "nvidia-device-plugin-daemonset-fb27k" [403cf382-cc51-4315-b678-7c3168a8179a] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1119 01:59:16.568149 1466137 system_pods.go:89] "registry-6b586f9694-2n7m4" [bda65628-e7f7-4672-860f-daef7b6a78b9] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1119 01:59:16.568171 1466137 system_pods.go:89] "registry-creds-764b6fb674-6dd8r" [ca3fc492-0471-4cd7-8f7f-c6c9a672b1c7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1119 01:59:16.568186 1466137 system_pods.go:89] "registry-proxy-7m7l6" [7c2e134e-77c6-414b-9341-2e7db32808cd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1119 01:59:16.568202 1466137 system_pods.go:89] "snapshot-controller-7d9fbc56b8-5fsqs" [1f5e980a-d8cb-48c7-9838-77769445e689] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1119 01:59:16.568217 1466137 system_pods.go:89] "snapshot-controller-7d9fbc56b8-x5sfx" [b9e2955c-41d4-4abd-b64a-a8bcb9ba52ef] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1119 01:59:16.568238 1466137 system_pods.go:89] "storage-provisioner" [e5590246-55a0-4bdc-87e4-844adc590229] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 01:59:16.568262 1466137 retry.go:31] will retry after 394.336323ms: missing components: kube-dns
	I1119 01:59:16.730846 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:16.733093 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:16.735164 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:16.968776 1466137 system_pods.go:86] 19 kube-system pods found
	I1119 01:59:16.968822 1466137 system_pods.go:89] "coredns-66bc5c9577-xmb7d" [005da4cd-c065-43b0-a68c-567b3aa0e823] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 01:59:16.968855 1466137 system_pods.go:89] "csi-hostpath-attacher-0" [25628434-04b4-4ee8-8e20-113140869edd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1119 01:59:16.968873 1466137 system_pods.go:89] "csi-hostpath-resizer-0" [63574fc5-5fe1-4ed1-b452-26f4c4a8dba4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1119 01:59:16.968891 1466137 system_pods.go:89] "csi-hostpathplugin-rfpfq" [3b2a8c8d-41b1-4a04-b4a6-8200b7915ccf] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1119 01:59:16.968902 1466137 system_pods.go:89] "etcd-addons-238225" [1ca33989-ddee-4b6c-84c9-0a02d2edc2f4] Running
	I1119 01:59:16.968910 1466137 system_pods.go:89] "kindnet-8wgcz" [461d42c1-3fe2-4a61-bece-95eede038f6e] Running
	I1119 01:59:16.968943 1466137 system_pods.go:89] "kube-apiserver-addons-238225" [9474f34f-a012-4109-9d35-907ab113f885] Running
	I1119 01:59:16.968959 1466137 system_pods.go:89] "kube-controller-manager-addons-238225" [e38b8531-6894-4942-8cf8-ee082fede3fe] Running
	I1119 01:59:16.968970 1466137 system_pods.go:89] "kube-ingress-dns-minikube" [7e42e918-7ccd-4d6a-905a-be5916f26ea5] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1119 01:59:16.968975 1466137 system_pods.go:89] "kube-proxy-6dppw" [d8300433-a767-4a0d-a70d-d64b75617671] Running
	I1119 01:59:16.968986 1466137 system_pods.go:89] "kube-scheduler-addons-238225" [37f9b2a2-0d47-4390-82b8-d58af8c0e3fd] Running
	I1119 01:59:16.968996 1466137 system_pods.go:89] "metrics-server-85b7d694d7-wjr8r" [c1645465-d21f-488d-b849-db3aca1a5ba3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1119 01:59:16.969017 1466137 system_pods.go:89] "nvidia-device-plugin-daemonset-fb27k" [403cf382-cc51-4315-b678-7c3168a8179a] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1119 01:59:16.969036 1466137 system_pods.go:89] "registry-6b586f9694-2n7m4" [bda65628-e7f7-4672-860f-daef7b6a78b9] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1119 01:59:16.969049 1466137 system_pods.go:89] "registry-creds-764b6fb674-6dd8r" [ca3fc492-0471-4cd7-8f7f-c6c9a672b1c7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1119 01:59:16.969063 1466137 system_pods.go:89] "registry-proxy-7m7l6" [7c2e134e-77c6-414b-9341-2e7db32808cd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1119 01:59:16.969079 1466137 system_pods.go:89] "snapshot-controller-7d9fbc56b8-5fsqs" [1f5e980a-d8cb-48c7-9838-77769445e689] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1119 01:59:16.969104 1466137 system_pods.go:89] "snapshot-controller-7d9fbc56b8-x5sfx" [b9e2955c-41d4-4abd-b64a-a8bcb9ba52ef] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1119 01:59:16.969117 1466137 system_pods.go:89] "storage-provisioner" [e5590246-55a0-4bdc-87e4-844adc590229] Running
	I1119 01:59:16.969151 1466137 retry.go:31] will retry after 601.534725ms: missing components: kube-dns
	I1119 01:59:17.068119 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:17.237128 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:17.237229 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:17.237307 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:17.549867 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:17.576125 1466137 system_pods.go:86] 19 kube-system pods found
	I1119 01:59:17.576164 1466137 system_pods.go:89] "coredns-66bc5c9577-xmb7d" [005da4cd-c065-43b0-a68c-567b3aa0e823] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 01:59:17.576173 1466137 system_pods.go:89] "csi-hostpath-attacher-0" [25628434-04b4-4ee8-8e20-113140869edd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1119 01:59:17.576181 1466137 system_pods.go:89] "csi-hostpath-resizer-0" [63574fc5-5fe1-4ed1-b452-26f4c4a8dba4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1119 01:59:17.576217 1466137 system_pods.go:89] "csi-hostpathplugin-rfpfq" [3b2a8c8d-41b1-4a04-b4a6-8200b7915ccf] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1119 01:59:17.576229 1466137 system_pods.go:89] "etcd-addons-238225" [1ca33989-ddee-4b6c-84c9-0a02d2edc2f4] Running
	I1119 01:59:17.576235 1466137 system_pods.go:89] "kindnet-8wgcz" [461d42c1-3fe2-4a61-bece-95eede038f6e] Running
	I1119 01:59:17.576241 1466137 system_pods.go:89] "kube-apiserver-addons-238225" [9474f34f-a012-4109-9d35-907ab113f885] Running
	I1119 01:59:17.576250 1466137 system_pods.go:89] "kube-controller-manager-addons-238225" [e38b8531-6894-4942-8cf8-ee082fede3fe] Running
	I1119 01:59:17.576258 1466137 system_pods.go:89] "kube-ingress-dns-minikube" [7e42e918-7ccd-4d6a-905a-be5916f26ea5] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1119 01:59:17.576262 1466137 system_pods.go:89] "kube-proxy-6dppw" [d8300433-a767-4a0d-a70d-d64b75617671] Running
	I1119 01:59:17.576288 1466137 system_pods.go:89] "kube-scheduler-addons-238225" [37f9b2a2-0d47-4390-82b8-d58af8c0e3fd] Running
	I1119 01:59:17.576302 1466137 system_pods.go:89] "metrics-server-85b7d694d7-wjr8r" [c1645465-d21f-488d-b849-db3aca1a5ba3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1119 01:59:17.576314 1466137 system_pods.go:89] "nvidia-device-plugin-daemonset-fb27k" [403cf382-cc51-4315-b678-7c3168a8179a] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1119 01:59:17.576327 1466137 system_pods.go:89] "registry-6b586f9694-2n7m4" [bda65628-e7f7-4672-860f-daef7b6a78b9] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1119 01:59:17.576336 1466137 system_pods.go:89] "registry-creds-764b6fb674-6dd8r" [ca3fc492-0471-4cd7-8f7f-c6c9a672b1c7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1119 01:59:17.576346 1466137 system_pods.go:89] "registry-proxy-7m7l6" [7c2e134e-77c6-414b-9341-2e7db32808cd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1119 01:59:17.576368 1466137 system_pods.go:89] "snapshot-controller-7d9fbc56b8-5fsqs" [1f5e980a-d8cb-48c7-9838-77769445e689] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1119 01:59:17.576385 1466137 system_pods.go:89] "snapshot-controller-7d9fbc56b8-x5sfx" [b9e2955c-41d4-4abd-b64a-a8bcb9ba52ef] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1119 01:59:17.576404 1466137 system_pods.go:89] "storage-provisioner" [e5590246-55a0-4bdc-87e4-844adc590229] Running
	I1119 01:59:17.576426 1466137 retry.go:31] will retry after 828.771953ms: missing components: kube-dns
	I1119 01:59:17.763007 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:17.763370 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:17.763475 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:18.056215 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:18.230940 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:18.233407 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:18.235401 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:18.409591 1466137 system_pods.go:86] 19 kube-system pods found
	I1119 01:59:18.409624 1466137 system_pods.go:89] "coredns-66bc5c9577-xmb7d" [005da4cd-c065-43b0-a68c-567b3aa0e823] Running
	I1119 01:59:18.409636 1466137 system_pods.go:89] "csi-hostpath-attacher-0" [25628434-04b4-4ee8-8e20-113140869edd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1119 01:59:18.409643 1466137 system_pods.go:89] "csi-hostpath-resizer-0" [63574fc5-5fe1-4ed1-b452-26f4c4a8dba4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1119 01:59:18.409673 1466137 system_pods.go:89] "csi-hostpathplugin-rfpfq" [3b2a8c8d-41b1-4a04-b4a6-8200b7915ccf] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1119 01:59:18.409699 1466137 system_pods.go:89] "etcd-addons-238225" [1ca33989-ddee-4b6c-84c9-0a02d2edc2f4] Running
	I1119 01:59:18.409708 1466137 system_pods.go:89] "kindnet-8wgcz" [461d42c1-3fe2-4a61-bece-95eede038f6e] Running
	I1119 01:59:18.409712 1466137 system_pods.go:89] "kube-apiserver-addons-238225" [9474f34f-a012-4109-9d35-907ab113f885] Running
	I1119 01:59:18.409719 1466137 system_pods.go:89] "kube-controller-manager-addons-238225" [e38b8531-6894-4942-8cf8-ee082fede3fe] Running
	I1119 01:59:18.409726 1466137 system_pods.go:89] "kube-ingress-dns-minikube" [7e42e918-7ccd-4d6a-905a-be5916f26ea5] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1119 01:59:18.409730 1466137 system_pods.go:89] "kube-proxy-6dppw" [d8300433-a767-4a0d-a70d-d64b75617671] Running
	I1119 01:59:18.409744 1466137 system_pods.go:89] "kube-scheduler-addons-238225" [37f9b2a2-0d47-4390-82b8-d58af8c0e3fd] Running
	I1119 01:59:18.409751 1466137 system_pods.go:89] "metrics-server-85b7d694d7-wjr8r" [c1645465-d21f-488d-b849-db3aca1a5ba3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1119 01:59:18.409758 1466137 system_pods.go:89] "nvidia-device-plugin-daemonset-fb27k" [403cf382-cc51-4315-b678-7c3168a8179a] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1119 01:59:18.409768 1466137 system_pods.go:89] "registry-6b586f9694-2n7m4" [bda65628-e7f7-4672-860f-daef7b6a78b9] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1119 01:59:18.409777 1466137 system_pods.go:89] "registry-creds-764b6fb674-6dd8r" [ca3fc492-0471-4cd7-8f7f-c6c9a672b1c7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1119 01:59:18.409782 1466137 system_pods.go:89] "registry-proxy-7m7l6" [7c2e134e-77c6-414b-9341-2e7db32808cd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1119 01:59:18.409797 1466137 system_pods.go:89] "snapshot-controller-7d9fbc56b8-5fsqs" [1f5e980a-d8cb-48c7-9838-77769445e689] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1119 01:59:18.409804 1466137 system_pods.go:89] "snapshot-controller-7d9fbc56b8-x5sfx" [b9e2955c-41d4-4abd-b64a-a8bcb9ba52ef] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1119 01:59:18.409810 1466137 system_pods.go:89] "storage-provisioner" [e5590246-55a0-4bdc-87e4-844adc590229] Running
	I1119 01:59:18.409820 1466137 system_pods.go:126] duration metric: took 2.746601265s to wait for k8s-apps to be running ...
	I1119 01:59:18.409832 1466137 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 01:59:18.409894 1466137 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 01:59:18.422625 1466137 system_svc.go:56] duration metric: took 12.78417ms WaitForService to wait for kubelet
	I1119 01:59:18.422651 1466137 kubeadm.go:587] duration metric: took 44.343735493s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 01:59:18.422691 1466137 node_conditions.go:102] verifying NodePressure condition ...
	I1119 01:59:18.425390 1466137 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1119 01:59:18.425420 1466137 node_conditions.go:123] node cpu capacity is 2
	I1119 01:59:18.425435 1466137 node_conditions.go:105] duration metric: took 2.725624ms to run NodePressure ...
	I1119 01:59:18.425448 1466137 start.go:242] waiting for startup goroutines ...
	I1119 01:59:18.548772 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:18.730879 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:18.733157 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:18.735656 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:19.049002 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:19.232426 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:19.233801 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:19.234926 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:19.549308 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:19.730480 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:19.732893 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:19.735301 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:20.049624 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:20.231331 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:20.234128 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:20.235046 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:20.548349 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:20.730464 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:20.733470 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:20.735547 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:21.048196 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:21.231458 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:21.234973 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:21.237869 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:21.548644 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:21.731738 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:21.734645 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:21.736160 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:22.048946 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:22.231616 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:22.233662 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:22.236074 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:22.548683 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:22.730901 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:22.733817 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:22.737792 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:23.048443 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:23.230778 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:23.233469 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:23.235623 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:23.548854 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:23.730758 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:23.733280 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:23.735183 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:24.049105 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:24.231457 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:24.234469 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:24.235642 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:24.548144 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:24.732451 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:24.735749 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:24.736194 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:25.049579 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:25.230811 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:25.233540 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:25.235460 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:25.549395 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:25.730323 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:25.736356 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:25.737404 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:26.050374 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:26.231204 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:26.235444 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:26.236401 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:26.548636 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:26.730467 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:26.734042 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:26.736250 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:27.049260 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:27.230504 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:27.234221 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:27.235950 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:27.549251 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:27.730402 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:27.736290 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:27.736784 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:28.049389 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:28.231161 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:28.234690 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:28.236884 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:28.550335 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:28.734513 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:28.736941 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:28.738290 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:29.049647 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:29.234811 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:29.238669 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:29.239639 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:29.552590 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:29.734478 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:29.736981 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:29.738391 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:30.050018 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:30.235006 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:30.235422 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:30.238186 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:30.549879 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:30.741657 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:30.742893 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:30.743006 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:31.048984 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:31.231016 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:31.233230 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:31.234754 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:31.549662 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:31.744683 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:31.744814 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:31.745146 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:32.049414 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:32.235748 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:32.235835 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:32.235989 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:32.549137 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:32.741997 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:32.749980 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:32.752299 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:33.048996 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:33.233422 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:33.242643 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:33.243107 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:33.550275 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:33.732988 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:33.735922 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:33.736522 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:34.048930 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:34.231433 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:34.234803 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:34.236820 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:34.548504 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:34.730967 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:34.733593 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:34.735862 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:35.049220 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:35.231793 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:35.238202 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:35.238783 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:35.548609 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:35.730915 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:35.734605 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:35.736389 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:36.057881 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:36.231145 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:36.235019 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:36.235853 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:36.548125 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:36.731212 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:36.734857 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:36.735012 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:37.048953 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:37.231197 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:37.234563 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:37.235978 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:37.549612 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:37.730324 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:37.733928 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:37.735294 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:38.049749 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:38.231559 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:38.237056 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:38.237429 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:38.549406 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:38.730455 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:38.733201 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:38.735157 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:39.048752 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:39.231212 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:39.239420 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:39.239791 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:39.548288 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:39.730283 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:39.734660 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:39.736197 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:40.050088 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:40.231730 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:40.234367 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:40.236837 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:40.548586 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:40.730547 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:40.732927 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:40.735063 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:41.049005 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:41.231085 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:41.234056 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:41.235427 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:41.549829 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:41.732071 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:41.733494 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:41.735501 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:42.049366 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:42.235531 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:42.255794 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:42.256352 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:42.548655 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:42.730394 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:42.734410 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:42.735631 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:43.048898 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:43.231228 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:43.233874 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:43.236054 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:43.548775 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:43.730950 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:43.734674 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:43.736369 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:44.049257 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:44.230240 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:44.235683 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:44.236248 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:44.548765 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:44.731112 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:44.735279 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:44.735600 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:45.050504 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:45.251021 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:45.252066 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:45.252668 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:45.547967 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:45.731170 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:45.734374 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:45.735606 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:46.048773 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:46.231127 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:46.233425 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:46.234743 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:46.548433 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:46.730617 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:46.734091 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:46.735210 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:47.048722 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:47.231016 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:47.236860 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:47.237394 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:47.549262 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:47.732867 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:47.733704 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:47.734941 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:48.048933 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:48.231410 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:48.234873 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:48.235771 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:48.548119 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:48.730981 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:48.733438 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:48.735195 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:49.048386 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:49.236342 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:49.236591 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:49.236973 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:49.548486 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:49.730411 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:49.735452 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:49.737881 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:50.048893 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:50.230557 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:50.234236 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:50.235811 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:50.548628 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:50.731102 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:50.733230 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:50.735357 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:51.049294 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:51.230027 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:51.233277 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:51.234866 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:51.548453 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:51.730303 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:51.734152 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:51.735778 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:52.049191 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:52.231196 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:52.234627 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:52.235685 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:52.548605 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:52.730724 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:52.732969 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:52.735163 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:53.049609 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:53.230354 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:53.233909 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:53.236097 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:53.548260 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:53.731377 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:53.737583 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:53.740453 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:54.049146 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:54.230853 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:54.233089 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:54.244332 1466137 kapi.go:107] duration metric: took 1m15.012306387s to wait for kubernetes.io/minikube-addons=registry ...
	I1119 01:59:54.550633 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:54.731453 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:54.733651 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:55.048231 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:55.232363 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:55.233713 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:55.548358 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:55.731047 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:55.733264 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:56.050713 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:56.230349 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:56.233934 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:56.548787 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:56.731060 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:56.733320 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:57.048825 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:57.231404 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:57.233614 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:57.548254 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:57.730979 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:57.733150 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:58.049287 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:58.230279 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:58.233700 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:58.549144 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:58.731068 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:58.733036 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:59.048824 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:59.232582 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:59.234606 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:59.548036 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:59.732228 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:59.738081 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:00.073168 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 02:00:00.258113 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:00.258834 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 02:00:00.554400 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 02:00:00.754196 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 02:00:00.754355 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:01.056243 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 02:00:01.240979 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 02:00:01.241298 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:01.549391 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 02:00:01.759748 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:01.759884 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 02:00:02.054470 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 02:00:02.231962 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 02:00:02.234915 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:02.551446 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 02:00:02.731042 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 02:00:02.734571 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:03.049973 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 02:00:03.232343 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 02:00:03.233828 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:03.548299 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 02:00:03.734666 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 02:00:03.738204 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:04.086316 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 02:00:04.230871 1466137 kapi.go:107] duration metric: took 1m20.503523344s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1119 02:00:04.233385 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:04.234452 1466137 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-238225 cluster.
	I1119 02:00:04.237397 1466137 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1119 02:00:04.240322 1466137 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1119 02:00:04.548634 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 02:00:04.734045 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:05.049056 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 02:00:05.234501 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:05.556461 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 02:00:05.740320 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:06.061247 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 02:00:06.234922 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:06.548477 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 02:00:06.734297 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:07.054308 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 02:00:07.234200 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:07.551851 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 02:00:07.734067 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:08.048414 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 02:00:08.233332 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:08.549149 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 02:00:08.734676 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:09.052919 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 02:00:09.234271 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:09.548734 1466137 kapi.go:107] duration metric: took 1m29.00389633s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1119 02:00:09.733719 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:10.234086 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:10.734549 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:11.239264 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:11.733628 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:12.234125 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:12.733958 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:13.233316 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:13.733676 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:14.234058 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:14.734015 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:15.233650 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:15.734239 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:16.234308 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:16.733727 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:17.234424 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:17.733576 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:18.234350 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:18.734354 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:19.234003 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:19.733680 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:20.234103 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:20.733825 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:21.234935 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:21.734629 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:22.234454 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:22.733567 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:23.233819 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:23.733674 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:24.233943 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:24.735087 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:25.233592 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:25.734571 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:26.234828 1466137 kapi.go:107] duration metric: took 1m46.004400513s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1119 02:00:26.237912 1466137 out.go:179] * Enabled addons: ingress-dns, inspektor-gadget, cloud-spanner, amd-gpu-device-plugin, nvidia-device-plugin, registry-creds, storage-provisioner, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, gcp-auth, csi-hostpath-driver, ingress
	I1119 02:00:26.241051 1466137 addons.go:515] duration metric: took 1m52.161642314s for enable addons: enabled=[ingress-dns inspektor-gadget cloud-spanner amd-gpu-device-plugin nvidia-device-plugin registry-creds storage-provisioner metrics-server yakd storage-provisioner-rancher volumesnapshots registry gcp-auth csi-hostpath-driver ingress]
	I1119 02:00:26.241146 1466137 start.go:247] waiting for cluster config update ...
	I1119 02:00:26.241173 1466137 start.go:256] writing updated cluster config ...
	I1119 02:00:26.241501 1466137 ssh_runner.go:195] Run: rm -f paused
	I1119 02:00:26.246110 1466137 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 02:00:26.249496 1466137 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-xmb7d" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:00:26.254056 1466137 pod_ready.go:94] pod "coredns-66bc5c9577-xmb7d" is "Ready"
	I1119 02:00:26.254127 1466137 pod_ready.go:86] duration metric: took 4.567261ms for pod "coredns-66bc5c9577-xmb7d" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:00:26.256333 1466137 pod_ready.go:83] waiting for pod "etcd-addons-238225" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:00:26.263228 1466137 pod_ready.go:94] pod "etcd-addons-238225" is "Ready"
	I1119 02:00:26.263255 1466137 pod_ready.go:86] duration metric: took 6.899514ms for pod "etcd-addons-238225" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:00:26.265932 1466137 pod_ready.go:83] waiting for pod "kube-apiserver-addons-238225" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:00:26.270854 1466137 pod_ready.go:94] pod "kube-apiserver-addons-238225" is "Ready"
	I1119 02:00:26.270879 1466137 pod_ready.go:86] duration metric: took 4.919421ms for pod "kube-apiserver-addons-238225" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:00:26.273537 1466137 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-238225" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:00:26.650708 1466137 pod_ready.go:94] pod "kube-controller-manager-addons-238225" is "Ready"
	I1119 02:00:26.650735 1466137 pod_ready.go:86] duration metric: took 377.172372ms for pod "kube-controller-manager-addons-238225" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:00:26.849990 1466137 pod_ready.go:83] waiting for pod "kube-proxy-6dppw" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:00:27.250626 1466137 pod_ready.go:94] pod "kube-proxy-6dppw" is "Ready"
	I1119 02:00:27.250656 1466137 pod_ready.go:86] duration metric: took 400.636804ms for pod "kube-proxy-6dppw" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:00:27.500641 1466137 pod_ready.go:83] waiting for pod "kube-scheduler-addons-238225" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:00:27.850064 1466137 pod_ready.go:94] pod "kube-scheduler-addons-238225" is "Ready"
	I1119 02:00:27.850132 1466137 pod_ready.go:86] duration metric: took 349.462849ms for pod "kube-scheduler-addons-238225" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:00:27.850153 1466137 pod_ready.go:40] duration metric: took 1.60400953s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 02:00:27.918164 1466137 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1119 02:00:27.921455 1466137 out.go:179] * Done! kubectl is now configured to use "addons-238225" cluster and "default" namespace by default
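	[Editor's note] The gcp-auth messages above say a pod can opt out of credential mounting by carrying a `gcp-auth-skip-secret` label in its configuration. As a minimal sketch only (the label value is not shown in the log, so "true" is assumed, and the pod and image names below are hypothetical placeholders), the label would sit under the pod's metadata like this:

	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-creds                   # hypothetical pod name
	  labels:
	    gcp-auth-skip-secret: "true"       # opts this pod out of GCP credential mounting (value assumed)
	spec:
	  containers:
	  - name: app
	    image: registry.k8s.io/pause:3.9   # placeholder image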
	
	
	==> CRI-O <==
	Nov 19 02:03:43 addons-238225 crio[827]: time="2025-11-19T02:03:43.923793888Z" level=info msg="Removed container 645b44bf5e5927ce3d5dae64ced8ae87f3602e9c27a70ef257ca75a23d04f096: kube-system/registry-creds-764b6fb674-6dd8r/registry-creds" id=9b9f4494-b94d-4c16-8839-bf058371326d name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 19 02:03:44 addons-238225 crio[827]: time="2025-11-19T02:03:44.547308746Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-mxw5p/POD" id=c77523fe-5476-46cf-9578-cb61445be0df name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 02:03:44 addons-238225 crio[827]: time="2025-11-19T02:03:44.547376551Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:03:44 addons-238225 crio[827]: time="2025-11-19T02:03:44.559022857Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-mxw5p Namespace:default ID:4a7c7bbab595bfe622516776a6a55237bf61914ca2186f18585defc9a76187dc UID:ffc399a3-1f3e-4791-9c52-964ee3174f1a NetNS:/var/run/netns/2bfc2712-ef7b-4dc9-9c00-a7be01624517 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001e36a50}] Aliases:map[]}"
	Nov 19 02:03:44 addons-238225 crio[827]: time="2025-11-19T02:03:44.559080456Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-mxw5p to CNI network \"kindnet\" (type=ptp)"
	Nov 19 02:03:44 addons-238225 crio[827]: time="2025-11-19T02:03:44.570835944Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-mxw5p Namespace:default ID:4a7c7bbab595bfe622516776a6a55237bf61914ca2186f18585defc9a76187dc UID:ffc399a3-1f3e-4791-9c52-964ee3174f1a NetNS:/var/run/netns/2bfc2712-ef7b-4dc9-9c00-a7be01624517 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001e36a50}] Aliases:map[]}"
	Nov 19 02:03:44 addons-238225 crio[827]: time="2025-11-19T02:03:44.570985223Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-mxw5p for CNI network kindnet (type=ptp)"
	Nov 19 02:03:44 addons-238225 crio[827]: time="2025-11-19T02:03:44.576732694Z" level=info msg="Ran pod sandbox 4a7c7bbab595bfe622516776a6a55237bf61914ca2186f18585defc9a76187dc with infra container: default/hello-world-app-5d498dc89-mxw5p/POD" id=c77523fe-5476-46cf-9578-cb61445be0df name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 02:03:44 addons-238225 crio[827]: time="2025-11-19T02:03:44.579386179Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=bab78c77-8a57-4310-9cc6-9ad63f83f0f1 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 02:03:44 addons-238225 crio[827]: time="2025-11-19T02:03:44.57962686Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=bab78c77-8a57-4310-9cc6-9ad63f83f0f1 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 02:03:44 addons-238225 crio[827]: time="2025-11-19T02:03:44.579698382Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:1.0 found" id=bab78c77-8a57-4310-9cc6-9ad63f83f0f1 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 02:03:44 addons-238225 crio[827]: time="2025-11-19T02:03:44.580780586Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=7e7c5629-96a6-409b-af29-959252dad9db name=/runtime.v1.ImageService/PullImage
	Nov 19 02:03:44 addons-238225 crio[827]: time="2025-11-19T02:03:44.582407389Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Nov 19 02:03:45 addons-238225 crio[827]: time="2025-11-19T02:03:45.159958132Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b" id=7e7c5629-96a6-409b-af29-959252dad9db name=/runtime.v1.ImageService/PullImage
	Nov 19 02:03:45 addons-238225 crio[827]: time="2025-11-19T02:03:45.160912415Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=c9c23551-c5e9-4392-bffb-5e82db82e3a9 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 02:03:45 addons-238225 crio[827]: time="2025-11-19T02:03:45.16406094Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=75469b0a-425e-4bbf-ada2-9220e3bd401c name=/runtime.v1.ImageService/ImageStatus
	Nov 19 02:03:45 addons-238225 crio[827]: time="2025-11-19T02:03:45.175225317Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-mxw5p/hello-world-app" id=78d7652b-c7c5-4bfe-845f-a781b6b107ee name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 02:03:45 addons-238225 crio[827]: time="2025-11-19T02:03:45.175571742Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:03:45 addons-238225 crio[827]: time="2025-11-19T02:03:45.221705782Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:03:45 addons-238225 crio[827]: time="2025-11-19T02:03:45.223539338Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/e50c31b59adaa32a5703e630edd3a27659402b920602d2619af152ed10237eb6/merged/etc/passwd: no such file or directory"
	Nov 19 02:03:45 addons-238225 crio[827]: time="2025-11-19T02:03:45.223954209Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/e50c31b59adaa32a5703e630edd3a27659402b920602d2619af152ed10237eb6/merged/etc/group: no such file or directory"
	Nov 19 02:03:45 addons-238225 crio[827]: time="2025-11-19T02:03:45.224950615Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:03:45 addons-238225 crio[827]: time="2025-11-19T02:03:45.260518633Z" level=info msg="Created container 10d6cbbf54308da6163333e083a2cf4343da939303798bc70a2146562561dd31: default/hello-world-app-5d498dc89-mxw5p/hello-world-app" id=78d7652b-c7c5-4bfe-845f-a781b6b107ee name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 02:03:45 addons-238225 crio[827]: time="2025-11-19T02:03:45.264646277Z" level=info msg="Starting container: 10d6cbbf54308da6163333e083a2cf4343da939303798bc70a2146562561dd31" id=c349f279-b808-48eb-bba3-eab54939d120 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 02:03:45 addons-238225 crio[827]: time="2025-11-19T02:03:45.27228222Z" level=info msg="Started container" PID=7216 containerID=10d6cbbf54308da6163333e083a2cf4343da939303798bc70a2146562561dd31 description=default/hello-world-app-5d498dc89-mxw5p/hello-world-app id=c349f279-b808-48eb-bba3-eab54939d120 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4a7c7bbab595bfe622516776a6a55237bf61914ca2186f18585defc9a76187dc
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	10d6cbbf54308       docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b                                        Less than a second ago   Running             hello-world-app                          0                   4a7c7bbab595b       hello-world-app-5d498dc89-mxw5p            default
	24785d76d60a4       a2fd0654e5baeec8de2209bfade13a0034e942e708fd2bbfce69bb26a3c02e14                                                                             2 seconds ago            Exited              registry-creds                           2                   2227bdd4d95bf       registry-creds-764b6fb674-6dd8r            kube-system
	f465649968b73       docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90                                              2 minutes ago            Running             nginx                                    0                   2ffd8a320850a       nginx                                      default
	27de6957d02e9       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          3 minutes ago            Running             busybox                                  0                   639abf5992c53       busybox                                    default
	daa6bf3f2ef67       registry.k8s.io/ingress-nginx/controller@sha256:655333e68deab34ee3701f400c4d5d9709000cdfdadb802e4bd7500b027e1259                             3 minutes ago            Running             controller                               0                   2aacd67009a16       ingress-nginx-controller-6c8bf45fb-gsl4s   ingress-nginx
	9c232d33326a7       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          3 minutes ago            Running             csi-snapshotter                          0                   4572422659b3b       csi-hostpathplugin-rfpfq                   kube-system
	772cfe62f02aa       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          3 minutes ago            Running             csi-provisioner                          0                   4572422659b3b       csi-hostpathplugin-rfpfq                   kube-system
	21ceae69f9b81       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            3 minutes ago            Running             liveness-probe                           0                   4572422659b3b       csi-hostpathplugin-rfpfq                   kube-system
	d64b782c68c24       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           3 minutes ago            Running             hostpath                                 0                   4572422659b3b       csi-hostpathplugin-rfpfq                   kube-system
	aa2392de9c092       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 3 minutes ago            Running             gcp-auth                                 0                   ef5b13b05972f       gcp-auth-78565c9fb4-nq6z4                  gcp-auth
	2d6558779eef9       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c2c5268a38de5c792beb84122c5350c644fbb9b85e04342ef72fa9a6d052f0b0                            3 minutes ago            Running             gadget                                   0                   84a8a4e10dbde       gadget-9r7cc                               gadget
	5e8f0f7f44431       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                3 minutes ago            Running             node-driver-registrar                    0                   4572422659b3b       csi-hostpathplugin-rfpfq                   kube-system
	7011495914d49       32daba64b064c571f27dbd4e285969f47f8e5dd6c692279b48622e941b4d137f                                                                             3 minutes ago            Exited              patch                                    1                   12993dbcf9ac8       ingress-nginx-admission-patch-vwbkh        ingress-nginx
	25d05604a9dbf       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e733096c3a5b75504c6380083abc960c9627efd23e099df780adfb4eec197583                   3 minutes ago            Exited              create                                   0                   5db638a46bc3a       ingress-nginx-admission-create-7dtkx       ingress-nginx
	b38eaf566b86b       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              3 minutes ago            Running             registry-proxy                           0                   3fb5e98f451bb       registry-proxy-7m7l6                       kube-system
	4965ee5c7f78b       gcr.io/cloud-spanner-emulator/emulator@sha256:c2688dc4b7ecb4546084321d63c2b3b616a54263488137e18fcb7c7005aef086                               3 minutes ago            Running             cloud-spanner-emulator                   0                   0cc824b0072b3       cloud-spanner-emulator-6f9fcf858b-hklsv    default
	913d1dc20a3a2       nvcr.io/nvidia/k8s-device-plugin@sha256:80924fc52384565a7c59f1e2f12319fb8f2b02a1c974bb3d73a9853fe01af874                                     4 minutes ago            Running             nvidia-device-plugin-ctr                 0                   6e3609ae0e684       nvidia-device-plugin-daemonset-fb27k       kube-system
	d4baa1f0a47d3       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           4 minutes ago            Running             registry                                 0                   25d89c4d90a2c       registry-6b586f9694-2n7m4                  kube-system
	91ecb63aa939e       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      4 minutes ago            Running             volume-snapshot-controller               0                   931506926cbcd       snapshot-controller-7d9fbc56b8-5fsqs       kube-system
	c53519ba9e004       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              4 minutes ago            Running             csi-resizer                              0                   08242009eb98f       csi-hostpath-resizer-0                     kube-system
	3405c2a94e1bf       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              4 minutes ago            Running             yakd                                     0                   5d627c54ecfea       yakd-dashboard-5ff678cb9-97cnn             yakd-dashboard
	dcca1b842fe44       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   4 minutes ago            Running             csi-external-health-monitor-controller   0                   4572422659b3b       csi-hostpathplugin-rfpfq                   kube-system
	e4c59f62ececb       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        4 minutes ago            Running             metrics-server                           0                   6c70658571a4e       metrics-server-85b7d694d7-wjr8r            kube-system
	e46dac206369b       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             4 minutes ago            Running             local-path-provisioner                   0                   c2b909d8d7192       local-path-provisioner-648f6765c9-t2frb    local-path-storage
	be9d5b6bedfbc       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               4 minutes ago            Running             minikube-ingress-dns                     0                   3c88179b82e2a       kube-ingress-dns-minikube                  kube-system
	d79a6a486de50       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      4 minutes ago            Running             volume-snapshot-controller               0                   0358d181f6f4b       snapshot-controller-7d9fbc56b8-x5sfx       kube-system
	99e3704db1eb4       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             4 minutes ago            Running             csi-attacher                             0                   bc61a4410b44b       csi-hostpath-attacher-0                    kube-system
	b94070f6dc6d4       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             4 minutes ago            Running             coredns                                  0                   197105b85621a       coredns-66bc5c9577-xmb7d                   kube-system
	d0f307f4b6c34       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             4 minutes ago            Running             storage-provisioner                      0                   0dff9d40ff5e5       storage-provisioner                        kube-system
	0c54dc25c8ad5       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             5 minutes ago            Running             kube-proxy                               0                   fadc826f254d1       kube-proxy-6dppw                           kube-system
	a841f7bd1c931       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             5 minutes ago            Running             kindnet-cni                              0                   7ed860b050687       kindnet-8wgcz                              kube-system
	76ee598a60e1e       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             5 minutes ago            Running             kube-scheduler                           0                   1b70a5487703d       kube-scheduler-addons-238225               kube-system
	a757a1a6114f8       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             5 minutes ago            Running             kube-apiserver                           0                   ae1622a386a04       kube-apiserver-addons-238225               kube-system
	7a77a55a81c01       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             5 minutes ago            Running             kube-controller-manager                  0                   1df5d8abbca42       kube-controller-manager-addons-238225      kube-system
	85abfad90a4c2       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             5 minutes ago            Running             etcd                                     0                   47b2af3c18cf3       etcd-addons-238225                         kube-system
	
	
	==> coredns [b94070f6dc6d4ea17b3a67020e38e4caa93a1b8b83d5bb691770abfbccddba96] <==
	[INFO] 10.244.0.12:48347 - 16613 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002687744s
	[INFO] 10.244.0.12:48347 - 60105 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000109789s
	[INFO] 10.244.0.12:48347 - 1641 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000106137s
	[INFO] 10.244.0.12:50412 - 33421 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000275486s
	[INFO] 10.244.0.12:50412 - 33199 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000311193s
	[INFO] 10.244.0.12:33771 - 59317 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000108172s
	[INFO] 10.244.0.12:33771 - 59128 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000081122s
	[INFO] 10.244.0.12:53249 - 32158 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000101601s
	[INFO] 10.244.0.12:53249 - 31986 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000082796s
	[INFO] 10.244.0.12:47553 - 3677 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001327076s
	[INFO] 10.244.0.12:47553 - 3480 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.0013841s
	[INFO] 10.244.0.12:43508 - 33585 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000111463s
	[INFO] 10.244.0.12:43508 - 33388 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000291485s
	[INFO] 10.244.0.20:35290 - 30133 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000197187s
	[INFO] 10.244.0.20:53840 - 41496 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000129801s
	[INFO] 10.244.0.20:41539 - 24321 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000173237s
	[INFO] 10.244.0.20:59633 - 9891 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000249337s
	[INFO] 10.244.0.20:42981 - 21698 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00018945s
	[INFO] 10.244.0.20:33488 - 2307 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000345572s
	[INFO] 10.244.0.20:54299 - 57282 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001920467s
	[INFO] 10.244.0.20:35789 - 26998 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002393608s
	[INFO] 10.244.0.20:57162 - 53250 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.003201679s
	[INFO] 10.244.0.20:40510 - 521 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003184408s
	[INFO] 10.244.0.23:60718 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000198812s
	[INFO] 10.244.0.23:58164 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000164934s
	
	
	==> describe nodes <==
	Name:               addons-238225
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-238225
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ebd49efe8b7d44f89fc96e21beffcd64dfee8277
	                    minikube.k8s.io/name=addons-238225
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T01_58_30_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-238225
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-238225"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 01:58:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-238225
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 02:03:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 02:03:35 +0000   Wed, 19 Nov 2025 01:58:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 02:03:35 +0000   Wed, 19 Nov 2025 01:58:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 02:03:35 +0000   Wed, 19 Nov 2025 01:58:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 02:03:35 +0000   Wed, 19 Nov 2025 01:59:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-238225
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                b9b6b0e2-598d-450e-a134-2ff248f1e4ea
	  Boot ID:                    b92b1939-fcd0-45dc-ac89-2d161566a71c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m18s
	  default                     cloud-spanner-emulator-6f9fcf858b-hklsv     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m9s
	  default                     hello-world-app-5d498dc89-mxw5p             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  gadget                      gadget-9r7cc                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m7s
	  gcp-auth                    gcp-auth-78565c9fb4-nq6z4                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m3s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-gsl4s    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         5m6s
	  kube-system                 coredns-66bc5c9577-xmb7d                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m12s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m6s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m6s
	  kube-system                 csi-hostpathplugin-rfpfq                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 etcd-addons-238225                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m17s
	  kube-system                 kindnet-8wgcz                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      5m13s
	  kube-system                 kube-apiserver-addons-238225                250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m17s
	  kube-system                 kube-controller-manager-addons-238225       200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m18s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m8s
	  kube-system                 kube-proxy-6dppw                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m13s
	  kube-system                 kube-scheduler-addons-238225                100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m19s
	  kube-system                 metrics-server-85b7d694d7-wjr8r             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         5m7s
	  kube-system                 nvidia-device-plugin-daemonset-fb27k        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 registry-6b586f9694-2n7m4                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m9s
	  kube-system                 registry-creds-764b6fb674-6dd8r             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m10s
	  kube-system                 registry-proxy-7m7l6                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 snapshot-controller-7d9fbc56b8-5fsqs        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m7s
	  kube-system                 snapshot-controller-7d9fbc56b8-x5sfx        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m7s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m8s
	  local-path-storage          local-path-provisioner-648f6765c9-t2frb     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m7s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-97cnn              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     5m7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 5m10s  kube-proxy       
	  Normal   Starting                 5m17s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m17s  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m17s  kubelet          Node addons-238225 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m17s  kubelet          Node addons-238225 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m17s  kubelet          Node addons-238225 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m14s  node-controller  Node addons-238225 event: Registered Node addons-238225 in Controller
	  Normal   NodeReady                4m31s  kubelet          Node addons-238225 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov19 01:56] kauditd_printk_skb: 8 callbacks suppressed
	[Nov19 01:58] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [85abfad90a4c2830eebe69eeb776b9e0f018907069c8517cc51c16103c6b98c1] <==
	{"level":"warn","ts":"2025-11-19T01:58:24.379140Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T01:58:24.395187Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T01:58:24.414512Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T01:58:24.430538Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T01:58:24.449240Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T01:58:24.471096Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T01:58:24.489374Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T01:58:24.506557Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T01:58:24.515826Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T01:58:24.541855Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T01:58:24.558207Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T01:58:24.568677Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T01:58:24.590175Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T01:58:24.602103Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T01:58:24.627010Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T01:58:24.674568Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T01:58:24.743992Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T01:58:24.754236Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T01:58:24.939765Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T01:58:40.806291Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T01:58:40.824309Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T01:59:02.992547Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T01:59:03.011383Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T01:59:03.033152Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T01:59:03.048528Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36060","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [aa2392de9c0927f647992fb8c60e46f193c1b89e2d5d0b8d69a02b9222a4fd7b] <==
	2025/11/19 02:00:03 GCP Auth Webhook started!
	2025/11/19 02:00:28 Ready to marshal response ...
	2025/11/19 02:00:28 Ready to write response ...
	2025/11/19 02:00:28 Ready to marshal response ...
	2025/11/19 02:00:28 Ready to write response ...
	2025/11/19 02:00:28 Ready to marshal response ...
	2025/11/19 02:00:28 Ready to write response ...
	2025/11/19 02:00:48 Ready to marshal response ...
	2025/11/19 02:00:48 Ready to write response ...
	2025/11/19 02:00:51 Ready to marshal response ...
	2025/11/19 02:00:51 Ready to write response ...
	2025/11/19 02:00:51 Ready to marshal response ...
	2025/11/19 02:00:51 Ready to write response ...
	2025/11/19 02:01:10 Ready to marshal response ...
	2025/11/19 02:01:10 Ready to write response ...
	2025/11/19 02:01:14 Ready to marshal response ...
	2025/11/19 02:01:14 Ready to write response ...
	2025/11/19 02:01:24 Ready to marshal response ...
	2025/11/19 02:01:24 Ready to write response ...
	2025/11/19 02:01:34 Ready to marshal response ...
	2025/11/19 02:01:34 Ready to write response ...
	2025/11/19 02:03:44 Ready to marshal response ...
	2025/11/19 02:03:44 Ready to write response ...
	
	
	==> kernel <==
	 02:03:46 up  9:45,  0 user,  load average: 0.43, 1.11, 0.94
	Linux addons-238225 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a841f7bd1c9314f581270e99b5249d563aa54a685fc9377709257d65d7241884] <==
	I1119 02:01:44.678337       1 main.go:301] handling current node
	I1119 02:01:54.677865       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 02:01:54.677904       1 main.go:301] handling current node
	I1119 02:02:04.677823       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 02:02:04.677854       1 main.go:301] handling current node
	I1119 02:02:14.678025       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 02:02:14.678059       1 main.go:301] handling current node
	I1119 02:02:24.680131       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 02:02:24.680168       1 main.go:301] handling current node
	I1119 02:02:34.680948       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 02:02:34.681052       1 main.go:301] handling current node
	I1119 02:02:44.677480       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 02:02:44.677533       1 main.go:301] handling current node
	I1119 02:02:54.677364       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 02:02:54.677399       1 main.go:301] handling current node
	I1119 02:03:04.679278       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 02:03:04.679313       1 main.go:301] handling current node
	I1119 02:03:14.677496       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 02:03:14.677546       1 main.go:301] handling current node
	I1119 02:03:24.677488       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 02:03:24.677598       1 main.go:301] handling current node
	I1119 02:03:34.677554       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 02:03:34.677749       1 main.go:301] handling current node
	I1119 02:03:44.677833       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 02:03:44.677866       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a757a1a6114f803952eab86dab9d7a3706e530f2d53eccfb6a046fcfea9ad3b4] <==
	W1119 01:58:40.816903       1 logging.go:55] [core] [Channel #263 SubChannel #264]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	I1119 01:58:43.601093       1 alloc.go:328] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.107.24.64"}
	W1119 01:59:02.992019       1 logging.go:55] [core] [Channel #270 SubChannel #271]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1119 01:59:03.009396       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1119 01:59:03.032916       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1119 01:59:03.048771       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1119 01:59:15.214761       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.24.64:443: connect: connection refused
	E1119 01:59:15.215016       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.107.24.64:443: connect: connection refused" logger="UnhandledError"
	W1119 01:59:15.217456       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.24.64:443: connect: connection refused
	E1119 01:59:15.217500       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.107.24.64:443: connect: connection refused" logger="UnhandledError"
	W1119 01:59:15.352912       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.24.64:443: connect: connection refused
	E1119 01:59:15.352958       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.107.24.64:443: connect: connection refused" logger="UnhandledError"
	W1119 01:59:33.819937       1 handler_proxy.go:99] no RequestInfo found in the context
	E1119 01:59:33.820005       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1119 01:59:33.821258       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.20.157:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.111.20.157:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.111.20.157:443: connect: connection refused" logger="UnhandledError"
	I1119 01:59:33.874873       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1119 01:59:33.888355       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I1119 02:01:23.930385       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1119 02:01:24.259078       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.104.178.213"}
	I1119 02:01:26.146938       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E1119 02:01:41.511099       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I1119 02:03:44.368474       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.109.125.132"}
	
	
	==> kube-controller-manager [7a77a55a81c017bef912f34dd320fb488cb213cabc9bee0e9a3126964c29252b] <==
	I1119 01:58:33.020431       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1119 01:58:33.020465       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1119 01:58:33.020914       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1119 01:58:33.021020       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1119 01:58:33.021244       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1119 01:58:33.021433       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1119 01:58:33.021618       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1119 01:58:33.021911       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1119 01:58:33.021949       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1119 01:58:33.022264       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1119 01:58:33.022822       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1119 01:58:33.023244       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1119 01:58:33.023282       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	E1119 01:58:39.076273       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1119 01:58:39.115632       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1119 01:59:02.984925       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1119 01:59:02.985193       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1119 01:59:02.985259       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1119 01:59:03.017020       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1119 01:59:03.021952       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1119 01:59:03.085486       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 01:59:03.123158       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 01:59:17.985530       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1119 01:59:33.090133       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1119 01:59:33.134845       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [0c54dc25c8ad51cf6765dc7bc85a062001f2e7ac00a156aaa64443d92f972181] <==
	I1119 01:58:35.092774       1 server_linux.go:53] "Using iptables proxy"
	I1119 01:58:35.223803       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 01:58:35.324676       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 01:58:35.324709       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1119 01:58:35.324774       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 01:58:35.470235       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 01:58:35.470295       1 server_linux.go:132] "Using iptables Proxier"
	I1119 01:58:35.481094       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 01:58:35.481430       1 server.go:527] "Version info" version="v1.34.1"
	I1119 01:58:35.481446       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 01:58:35.495052       1 config.go:200] "Starting service config controller"
	I1119 01:58:35.495078       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 01:58:35.495097       1 config.go:106] "Starting endpoint slice config controller"
	I1119 01:58:35.495101       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 01:58:35.495113       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 01:58:35.495116       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 01:58:35.495831       1 config.go:309] "Starting node config controller"
	I1119 01:58:35.495844       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 01:58:35.495851       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 01:58:35.595418       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1119 01:58:35.595454       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1119 01:58:35.595496       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [76ee598a60e1ecbb0846681cb536270450910784fdbfeec1b724bbc506bc7fc1] <==
	I1119 01:58:26.775771       1 serving.go:386] Generated self-signed cert in-memory
	I1119 01:58:28.528281       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1119 01:58:28.528313       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 01:58:28.534196       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1119 01:58:28.534311       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1119 01:58:28.534378       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 01:58:28.534414       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 01:58:28.534453       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1119 01:58:28.534496       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1119 01:58:28.534638       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1119 01:58:28.534708       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1119 01:58:28.635021       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1119 01:58:28.635093       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1119 01:58:28.635433       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 19 02:02:09 addons-238225 kubelet[1266]: I1119 02:02:09.352845    1266 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-fb27k" secret="" err="secret \"gcp-auth\" not found"
	Nov 19 02:02:18 addons-238225 kubelet[1266]: I1119 02:02:18.350276    1266 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-7m7l6" secret="" err="secret \"gcp-auth\" not found"
	Nov 19 02:02:26 addons-238225 kubelet[1266]: I1119 02:02:26.350139    1266 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-2n7m4" secret="" err="secret \"gcp-auth\" not found"
	Nov 19 02:03:25 addons-238225 kubelet[1266]: I1119 02:03:25.350634    1266 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-fb27k" secret="" err="secret \"gcp-auth\" not found"
	Nov 19 02:03:26 addons-238225 kubelet[1266]: I1119 02:03:26.451602    1266 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-6dd8r" secret="" err="secret \"gcp-auth\" not found"
	Nov 19 02:03:27 addons-238225 kubelet[1266]: I1119 02:03:27.835124    1266 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-6dd8r" secret="" err="secret \"gcp-auth\" not found"
	Nov 19 02:03:27 addons-238225 kubelet[1266]: I1119 02:03:27.835187    1266 scope.go:117] "RemoveContainer" containerID="38faa8d2d7ad8fce9382382c09f721c10f92e3f37c2f8e61b814521822da525c"
	Nov 19 02:03:28 addons-238225 kubelet[1266]: I1119 02:03:28.846198    1266 scope.go:117] "RemoveContainer" containerID="38faa8d2d7ad8fce9382382c09f721c10f92e3f37c2f8e61b814521822da525c"
	Nov 19 02:03:28 addons-238225 kubelet[1266]: I1119 02:03:28.846608    1266 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-6dd8r" secret="" err="secret \"gcp-auth\" not found"
	Nov 19 02:03:28 addons-238225 kubelet[1266]: I1119 02:03:28.846656    1266 scope.go:117] "RemoveContainer" containerID="645b44bf5e5927ce3d5dae64ced8ae87f3602e9c27a70ef257ca75a23d04f096"
	Nov 19 02:03:28 addons-238225 kubelet[1266]: E1119 02:03:28.846848    1266 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 10s restarting failed container=registry-creds pod=registry-creds-764b6fb674-6dd8r_kube-system(ca3fc492-0471-4cd7-8f7f-c6c9a672b1c7)\"" pod="kube-system/registry-creds-764b6fb674-6dd8r" podUID="ca3fc492-0471-4cd7-8f7f-c6c9a672b1c7"
	Nov 19 02:03:29 addons-238225 kubelet[1266]: E1119 02:03:29.454788    1266 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/be3fccc1ea8f13ce9be05abe4a65e172680155b379ebb64684811280944516f9/diff" to get inode usage: stat /var/lib/containers/storage/overlay/be3fccc1ea8f13ce9be05abe4a65e172680155b379ebb64684811280944516f9/diff: no such file or directory, extraDiskErr: <nil>
	Nov 19 02:03:29 addons-238225 kubelet[1266]: I1119 02:03:29.851546    1266 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-6dd8r" secret="" err="secret \"gcp-auth\" not found"
	Nov 19 02:03:29 addons-238225 kubelet[1266]: I1119 02:03:29.851607    1266 scope.go:117] "RemoveContainer" containerID="645b44bf5e5927ce3d5dae64ced8ae87f3602e9c27a70ef257ca75a23d04f096"
	Nov 19 02:03:29 addons-238225 kubelet[1266]: E1119 02:03:29.851758    1266 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 10s restarting failed container=registry-creds pod=registry-creds-764b6fb674-6dd8r_kube-system(ca3fc492-0471-4cd7-8f7f-c6c9a672b1c7)\"" pod="kube-system/registry-creds-764b6fb674-6dd8r" podUID="ca3fc492-0471-4cd7-8f7f-c6c9a672b1c7"
	Nov 19 02:03:35 addons-238225 kubelet[1266]: I1119 02:03:35.350903    1266 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-7m7l6" secret="" err="secret \"gcp-auth\" not found"
	Nov 19 02:03:43 addons-238225 kubelet[1266]: I1119 02:03:43.350891    1266 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-6dd8r" secret="" err="secret \"gcp-auth\" not found"
	Nov 19 02:03:43 addons-238225 kubelet[1266]: I1119 02:03:43.351380    1266 scope.go:117] "RemoveContainer" containerID="645b44bf5e5927ce3d5dae64ced8ae87f3602e9c27a70ef257ca75a23d04f096"
	Nov 19 02:03:43 addons-238225 kubelet[1266]: I1119 02:03:43.901750    1266 scope.go:117] "RemoveContainer" containerID="645b44bf5e5927ce3d5dae64ced8ae87f3602e9c27a70ef257ca75a23d04f096"
	Nov 19 02:03:43 addons-238225 kubelet[1266]: I1119 02:03:43.902403    1266 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-6dd8r" secret="" err="secret \"gcp-auth\" not found"
	Nov 19 02:03:43 addons-238225 kubelet[1266]: I1119 02:03:43.902461    1266 scope.go:117] "RemoveContainer" containerID="24785d76d60a493b4c91dda5d0cde976ac356362f4a2effe58d4e06d3840542a"
	Nov 19 02:03:43 addons-238225 kubelet[1266]: E1119 02:03:43.902939    1266 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 20s restarting failed container=registry-creds pod=registry-creds-764b6fb674-6dd8r_kube-system(ca3fc492-0471-4cd7-8f7f-c6c9a672b1c7)\"" pod="kube-system/registry-creds-764b6fb674-6dd8r" podUID="ca3fc492-0471-4cd7-8f7f-c6c9a672b1c7"
	Nov 19 02:03:44 addons-238225 kubelet[1266]: E1119 02:03:44.229920    1266 status_manager.go:1018] "Failed to get status for pod" err="pods \"hello-world-app-5d498dc89-mxw5p\" is forbidden: User \"system:node:addons-238225\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-238225' and this object" podUID="ffc399a3-1f3e-4791-9c52-964ee3174f1a" pod="default/hello-world-app-5d498dc89-mxw5p"
	Nov 19 02:03:44 addons-238225 kubelet[1266]: I1119 02:03:44.288161    1266 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-686dw\" (UniqueName: \"kubernetes.io/projected/ffc399a3-1f3e-4791-9c52-964ee3174f1a-kube-api-access-686dw\") pod \"hello-world-app-5d498dc89-mxw5p\" (UID: \"ffc399a3-1f3e-4791-9c52-964ee3174f1a\") " pod="default/hello-world-app-5d498dc89-mxw5p"
	Nov 19 02:03:44 addons-238225 kubelet[1266]: I1119 02:03:44.288211    1266 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/ffc399a3-1f3e-4791-9c52-964ee3174f1a-gcp-creds\") pod \"hello-world-app-5d498dc89-mxw5p\" (UID: \"ffc399a3-1f3e-4791-9c52-964ee3174f1a\") " pod="default/hello-world-app-5d498dc89-mxw5p"
	
	
	==> storage-provisioner [d0f307f4b6c3423d1af0ad1f8066d8df474dcdfb5ec77842739411e57b5bbc77] <==
	W1119 02:03:21.414291       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:03:23.417648       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:03:23.421949       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:03:25.424787       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:03:25.429570       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:03:27.433203       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:03:27.438371       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:03:29.442838       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:03:29.449552       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:03:31.453413       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:03:31.460606       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:03:33.463977       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:03:33.468576       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:03:35.472085       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:03:35.476379       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:03:37.479610       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:03:37.484058       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:03:39.487291       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:03:39.491608       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:03:41.495202       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:03:41.499521       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:03:43.502527       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:03:43.507193       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:03:45.511020       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:03:45.516843       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-238225 -n addons-238225
helpers_test.go:269: (dbg) Run:  kubectl --context addons-238225 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-7dtkx ingress-nginx-admission-patch-vwbkh
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-238225 describe pod ingress-nginx-admission-create-7dtkx ingress-nginx-admission-patch-vwbkh
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-238225 describe pod ingress-nginx-admission-create-7dtkx ingress-nginx-admission-patch-vwbkh: exit status 1 (78.638585ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-7dtkx" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-vwbkh" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-238225 describe pod ingress-nginx-admission-create-7dtkx ingress-nginx-admission-patch-vwbkh: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-238225 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-238225 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (268.7471ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1119 02:03:47.485380 1475859 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:03:47.486764 1475859 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:03:47.486786 1475859 out.go:374] Setting ErrFile to fd 2...
	I1119 02:03:47.486792 1475859 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:03:47.487149 1475859 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-1463525/.minikube/bin
	I1119 02:03:47.487511 1475859 mustload.go:66] Loading cluster: addons-238225
	I1119 02:03:47.487880 1475859 config.go:182] Loaded profile config "addons-238225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:03:47.487898 1475859 addons.go:607] checking whether the cluster is paused
	I1119 02:03:47.488001 1475859 config.go:182] Loaded profile config "addons-238225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:03:47.488016 1475859 host.go:66] Checking if "addons-238225" exists ...
	I1119 02:03:47.488460 1475859 cli_runner.go:164] Run: docker container inspect addons-238225 --format={{.State.Status}}
	I1119 02:03:47.506544 1475859 ssh_runner.go:195] Run: systemctl --version
	I1119 02:03:47.506608 1475859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-238225
	I1119 02:03:47.529264 1475859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34614 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/addons-238225/id_rsa Username:docker}
	I1119 02:03:47.628377 1475859 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 02:03:47.628540 1475859 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 02:03:47.662428 1475859 cri.go:89] found id: "24785d76d60a493b4c91dda5d0cde976ac356362f4a2effe58d4e06d3840542a"
	I1119 02:03:47.662459 1475859 cri.go:89] found id: "9c232d33326a74acf603b22763aa1dc42d70a479e12432e50bfa9c9405eed2d8"
	I1119 02:03:47.662464 1475859 cri.go:89] found id: "772cfe62f02aaecb841e889b3cbb65d2a9b5651157073f7db00ccf3ddff4c0f1"
	I1119 02:03:47.662468 1475859 cri.go:89] found id: "21ceae69f9b817368777579d582cb590af22569048c0dbfdcc0ff812b0e66e82"
	I1119 02:03:47.662472 1475859 cri.go:89] found id: "d64b782c68c2447187c1e3efd65e0913b2455f09ccc969d04feb74abe38e660a"
	I1119 02:03:47.662475 1475859 cri.go:89] found id: "5e8f0f7f444317dfe5eacc4508981c90287704743391b71bc5ccb185d00f1f05"
	I1119 02:03:47.662478 1475859 cri.go:89] found id: "b38eaf566b86ba36188f3bd9d9c4bf78d2c17cca364dfee5652ee99d4b60a7b9"
	I1119 02:03:47.662482 1475859 cri.go:89] found id: "913d1dc20a3a2502a8c5187d02817ea7496846fae75cdd364154dcf3ba504b95"
	I1119 02:03:47.662485 1475859 cri.go:89] found id: "d4baa1f0a47d31c47f92d4737ca4bf1a74bf81781024ad8fb0bc1aab729ee9e4"
	I1119 02:03:47.662492 1475859 cri.go:89] found id: "91ecb63aa939ed937635e6c758cf1f28306bf72385e48c4d5d6e5eac9fe999f5"
	I1119 02:03:47.662495 1475859 cri.go:89] found id: "c53519ba9e004b3ff7be4f7f3cef7fab949fdcf796eaede9f39a73fd6b199e6e"
	I1119 02:03:47.662508 1475859 cri.go:89] found id: "dcca1b842fe4422e2747d4422c0f9f7b575eecab7d393d4f0995a33df7c79162"
	I1119 02:03:47.662512 1475859 cri.go:89] found id: "e4c59f62ececb825cf3a40b0802bc8b6ecb4d59770f79a84a7403b9319302101"
	I1119 02:03:47.662515 1475859 cri.go:89] found id: "be9d5b6bedfbc91bb699344892f0474b20a841254ce6fd3144408edd11bc007d"
	I1119 02:03:47.662518 1475859 cri.go:89] found id: "d79a6a486de50d2c0685228164143b81b6a22900f48f3a05491c47877066261b"
	I1119 02:03:47.662526 1475859 cri.go:89] found id: "99e3704db1eb401031a862edd15e56b4aec5c806bb339f38d76ba88c7e8fa047"
	I1119 02:03:47.662541 1475859 cri.go:89] found id: "b94070f6dc6d4ea17b3a67020e38e4caa93a1b8b83d5bb691770abfbccddba96"
	I1119 02:03:47.662546 1475859 cri.go:89] found id: "d0f307f4b6c3423d1af0ad1f8066d8df474dcdfb5ec77842739411e57b5bbc77"
	I1119 02:03:47.662549 1475859 cri.go:89] found id: "0c54dc25c8ad51cf6765dc7bc85a062001f2e7ac00a156aaa64443d92f972181"
	I1119 02:03:47.662552 1475859 cri.go:89] found id: "a841f7bd1c9314f581270e99b5249d563aa54a685fc9377709257d65d7241884"
	I1119 02:03:47.662557 1475859 cri.go:89] found id: "76ee598a60e1ecbb0846681cb536270450910784fdbfeec1b724bbc506bc7fc1"
	I1119 02:03:47.662560 1475859 cri.go:89] found id: "a757a1a6114f803952eab86dab9d7a3706e530f2d53eccfb6a046fcfea9ad3b4"
	I1119 02:03:47.662563 1475859 cri.go:89] found id: "7a77a55a81c017bef912f34dd320fb488cb213cabc9bee0e9a3126964c29252b"
	I1119 02:03:47.662567 1475859 cri.go:89] found id: "85abfad90a4c2830eebe69eeb776b9e0f018907069c8517cc51c16103c6b98c1"
	I1119 02:03:47.662570 1475859 cri.go:89] found id: ""
	I1119 02:03:47.662633 1475859 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 02:03:47.678283 1475859 out.go:203] 
	W1119 02:03:47.681205 1475859 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:03:47Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:03:47Z" level=error msg="open /run/runc: no such file or directory"
	
	W1119 02:03:47.681222 1475859 out.go:285] * 
	* 
	W1119 02:03:47.690281 1475859 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 02:03:47.693354 1475859 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-arm64 -p addons-238225 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-238225 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-238225 addons disable ingress --alsologtostderr -v=1: exit status 11 (267.55893ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1119 02:03:47.762412 1475970 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:03:47.763798 1475970 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:03:47.763835 1475970 out.go:374] Setting ErrFile to fd 2...
	I1119 02:03:47.763858 1475970 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:03:47.764126 1475970 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-1463525/.minikube/bin
	I1119 02:03:47.764447 1475970 mustload.go:66] Loading cluster: addons-238225
	I1119 02:03:47.764846 1475970 config.go:182] Loaded profile config "addons-238225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:03:47.764883 1475970 addons.go:607] checking whether the cluster is paused
	I1119 02:03:47.765029 1475970 config.go:182] Loaded profile config "addons-238225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:03:47.765059 1475970 host.go:66] Checking if "addons-238225" exists ...
	I1119 02:03:47.765586 1475970 cli_runner.go:164] Run: docker container inspect addons-238225 --format={{.State.Status}}
	I1119 02:03:47.782987 1475970 ssh_runner.go:195] Run: systemctl --version
	I1119 02:03:47.783038 1475970 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-238225
	I1119 02:03:47.800434 1475970 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34614 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/addons-238225/id_rsa Username:docker}
	I1119 02:03:47.899998 1475970 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 02:03:47.900113 1475970 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 02:03:47.933046 1475970 cri.go:89] found id: "24785d76d60a493b4c91dda5d0cde976ac356362f4a2effe58d4e06d3840542a"
	I1119 02:03:47.933076 1475970 cri.go:89] found id: "9c232d33326a74acf603b22763aa1dc42d70a479e12432e50bfa9c9405eed2d8"
	I1119 02:03:47.933082 1475970 cri.go:89] found id: "772cfe62f02aaecb841e889b3cbb65d2a9b5651157073f7db00ccf3ddff4c0f1"
	I1119 02:03:47.933085 1475970 cri.go:89] found id: "21ceae69f9b817368777579d582cb590af22569048c0dbfdcc0ff812b0e66e82"
	I1119 02:03:47.933089 1475970 cri.go:89] found id: "d64b782c68c2447187c1e3efd65e0913b2455f09ccc969d04feb74abe38e660a"
	I1119 02:03:47.933092 1475970 cri.go:89] found id: "5e8f0f7f444317dfe5eacc4508981c90287704743391b71bc5ccb185d00f1f05"
	I1119 02:03:47.933122 1475970 cri.go:89] found id: "b38eaf566b86ba36188f3bd9d9c4bf78d2c17cca364dfee5652ee99d4b60a7b9"
	I1119 02:03:47.933126 1475970 cri.go:89] found id: "913d1dc20a3a2502a8c5187d02817ea7496846fae75cdd364154dcf3ba504b95"
	I1119 02:03:47.933130 1475970 cri.go:89] found id: "d4baa1f0a47d31c47f92d4737ca4bf1a74bf81781024ad8fb0bc1aab729ee9e4"
	I1119 02:03:47.933136 1475970 cri.go:89] found id: "91ecb63aa939ed937635e6c758cf1f28306bf72385e48c4d5d6e5eac9fe999f5"
	I1119 02:03:47.933144 1475970 cri.go:89] found id: "c53519ba9e004b3ff7be4f7f3cef7fab949fdcf796eaede9f39a73fd6b199e6e"
	I1119 02:03:47.933147 1475970 cri.go:89] found id: "dcca1b842fe4422e2747d4422c0f9f7b575eecab7d393d4f0995a33df7c79162"
	I1119 02:03:47.933150 1475970 cri.go:89] found id: "e4c59f62ececb825cf3a40b0802bc8b6ecb4d59770f79a84a7403b9319302101"
	I1119 02:03:47.933153 1475970 cri.go:89] found id: "be9d5b6bedfbc91bb699344892f0474b20a841254ce6fd3144408edd11bc007d"
	I1119 02:03:47.933156 1475970 cri.go:89] found id: "d79a6a486de50d2c0685228164143b81b6a22900f48f3a05491c47877066261b"
	I1119 02:03:47.933161 1475970 cri.go:89] found id: "99e3704db1eb401031a862edd15e56b4aec5c806bb339f38d76ba88c7e8fa047"
	I1119 02:03:47.933167 1475970 cri.go:89] found id: "b94070f6dc6d4ea17b3a67020e38e4caa93a1b8b83d5bb691770abfbccddba96"
	I1119 02:03:47.933171 1475970 cri.go:89] found id: "d0f307f4b6c3423d1af0ad1f8066d8df474dcdfb5ec77842739411e57b5bbc77"
	I1119 02:03:47.933175 1475970 cri.go:89] found id: "0c54dc25c8ad51cf6765dc7bc85a062001f2e7ac00a156aaa64443d92f972181"
	I1119 02:03:47.933178 1475970 cri.go:89] found id: "a841f7bd1c9314f581270e99b5249d563aa54a685fc9377709257d65d7241884"
	I1119 02:03:47.933200 1475970 cri.go:89] found id: "76ee598a60e1ecbb0846681cb536270450910784fdbfeec1b724bbc506bc7fc1"
	I1119 02:03:47.933205 1475970 cri.go:89] found id: "a757a1a6114f803952eab86dab9d7a3706e530f2d53eccfb6a046fcfea9ad3b4"
	I1119 02:03:47.933223 1475970 cri.go:89] found id: "7a77a55a81c017bef912f34dd320fb488cb213cabc9bee0e9a3126964c29252b"
	I1119 02:03:47.933227 1475970 cri.go:89] found id: "85abfad90a4c2830eebe69eeb776b9e0f018907069c8517cc51c16103c6b98c1"
	I1119 02:03:47.933230 1475970 cri.go:89] found id: ""
	I1119 02:03:47.933280 1475970 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 02:03:47.948857 1475970 out.go:203] 
	W1119 02:03:47.951728 1475970 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:03:47Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:03:47Z" level=error msg="open /run/runc: no such file or directory"
	
	W1119 02:03:47.951775 1475970 out.go:285] * 
	* 
	W1119 02:03:47.960658 1475970 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 02:03:47.963740 1475970 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-arm64 -p addons-238225 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (144.36s)
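
Every failing "addons disable" call in this report shares the same signature: minikube first checks whether the cluster is paused by listing kube-system containers with crictl (which succeeds) and then running "sudo runc list -f json", which exits 1 because /run/runc does not exist on this crio node, so the command aborts with MK_ADDON_DISABLE_PAUSED before the addon is touched. A minimal sketch for reproducing that check by hand on the node follows; the crio-specific runc root in the last line is an assumption and is not shown anywhere in this report.

    # The two commands minikube runs for the paused-state check (copied from the stderr above).
    minikube -p addons-238225 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
    minikube -p addons-238225 ssh -- sudo runc list -f json    # fails: open /run/runc: no such file or directory
    # crio normally keeps its runc state under a runtime-specific root; pointing runc at that directory
    # (the path below is hypothetical - check crio's configured runtime_root) should list the containers:
    minikube -p addons-238225 ssh -- sudo runc --root /run/runc-crio list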

                                                
                                    
TestAddons/parallel/InspektorGadget (6.28s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-9r7cc" [266ccc73-b301-4853-bfe0-6f1cb158f5f8] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003844733s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-238225 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-238225 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (270.85621ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1119 02:01:23.389312 1473933 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:01:23.390699 1473933 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:01:23.390719 1473933 out.go:374] Setting ErrFile to fd 2...
	I1119 02:01:23.390726 1473933 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:01:23.390989 1473933 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-1463525/.minikube/bin
	I1119 02:01:23.391281 1473933 mustload.go:66] Loading cluster: addons-238225
	I1119 02:01:23.391683 1473933 config.go:182] Loaded profile config "addons-238225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:01:23.391701 1473933 addons.go:607] checking whether the cluster is paused
	I1119 02:01:23.391802 1473933 config.go:182] Loaded profile config "addons-238225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:01:23.391818 1473933 host.go:66] Checking if "addons-238225" exists ...
	I1119 02:01:23.392263 1473933 cli_runner.go:164] Run: docker container inspect addons-238225 --format={{.State.Status}}
	I1119 02:01:23.410777 1473933 ssh_runner.go:195] Run: systemctl --version
	I1119 02:01:23.410841 1473933 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-238225
	I1119 02:01:23.429499 1473933 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34614 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/addons-238225/id_rsa Username:docker}
	I1119 02:01:23.528028 1473933 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 02:01:23.528114 1473933 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 02:01:23.564418 1473933 cri.go:89] found id: "9c232d33326a74acf603b22763aa1dc42d70a479e12432e50bfa9c9405eed2d8"
	I1119 02:01:23.564437 1473933 cri.go:89] found id: "772cfe62f02aaecb841e889b3cbb65d2a9b5651157073f7db00ccf3ddff4c0f1"
	I1119 02:01:23.564442 1473933 cri.go:89] found id: "21ceae69f9b817368777579d582cb590af22569048c0dbfdcc0ff812b0e66e82"
	I1119 02:01:23.564447 1473933 cri.go:89] found id: "d64b782c68c2447187c1e3efd65e0913b2455f09ccc969d04feb74abe38e660a"
	I1119 02:01:23.564451 1473933 cri.go:89] found id: "5e8f0f7f444317dfe5eacc4508981c90287704743391b71bc5ccb185d00f1f05"
	I1119 02:01:23.564455 1473933 cri.go:89] found id: "b38eaf566b86ba36188f3bd9d9c4bf78d2c17cca364dfee5652ee99d4b60a7b9"
	I1119 02:01:23.564458 1473933 cri.go:89] found id: "913d1dc20a3a2502a8c5187d02817ea7496846fae75cdd364154dcf3ba504b95"
	I1119 02:01:23.564462 1473933 cri.go:89] found id: "d4baa1f0a47d31c47f92d4737ca4bf1a74bf81781024ad8fb0bc1aab729ee9e4"
	I1119 02:01:23.564465 1473933 cri.go:89] found id: "91ecb63aa939ed937635e6c758cf1f28306bf72385e48c4d5d6e5eac9fe999f5"
	I1119 02:01:23.564472 1473933 cri.go:89] found id: "c53519ba9e004b3ff7be4f7f3cef7fab949fdcf796eaede9f39a73fd6b199e6e"
	I1119 02:01:23.564475 1473933 cri.go:89] found id: "dcca1b842fe4422e2747d4422c0f9f7b575eecab7d393d4f0995a33df7c79162"
	I1119 02:01:23.564478 1473933 cri.go:89] found id: "e4c59f62ececb825cf3a40b0802bc8b6ecb4d59770f79a84a7403b9319302101"
	I1119 02:01:23.564481 1473933 cri.go:89] found id: "be9d5b6bedfbc91bb699344892f0474b20a841254ce6fd3144408edd11bc007d"
	I1119 02:01:23.564485 1473933 cri.go:89] found id: "d79a6a486de50d2c0685228164143b81b6a22900f48f3a05491c47877066261b"
	I1119 02:01:23.564488 1473933 cri.go:89] found id: "99e3704db1eb401031a862edd15e56b4aec5c806bb339f38d76ba88c7e8fa047"
	I1119 02:01:23.564494 1473933 cri.go:89] found id: "b94070f6dc6d4ea17b3a67020e38e4caa93a1b8b83d5bb691770abfbccddba96"
	I1119 02:01:23.564497 1473933 cri.go:89] found id: "d0f307f4b6c3423d1af0ad1f8066d8df474dcdfb5ec77842739411e57b5bbc77"
	I1119 02:01:23.564502 1473933 cri.go:89] found id: "0c54dc25c8ad51cf6765dc7bc85a062001f2e7ac00a156aaa64443d92f972181"
	I1119 02:01:23.564505 1473933 cri.go:89] found id: "a841f7bd1c9314f581270e99b5249d563aa54a685fc9377709257d65d7241884"
	I1119 02:01:23.564509 1473933 cri.go:89] found id: "76ee598a60e1ecbb0846681cb536270450910784fdbfeec1b724bbc506bc7fc1"
	I1119 02:01:23.564515 1473933 cri.go:89] found id: "a757a1a6114f803952eab86dab9d7a3706e530f2d53eccfb6a046fcfea9ad3b4"
	I1119 02:01:23.564518 1473933 cri.go:89] found id: "7a77a55a81c017bef912f34dd320fb488cb213cabc9bee0e9a3126964c29252b"
	I1119 02:01:23.564522 1473933 cri.go:89] found id: "85abfad90a4c2830eebe69eeb776b9e0f018907069c8517cc51c16103c6b98c1"
	I1119 02:01:23.564525 1473933 cri.go:89] found id: ""
	I1119 02:01:23.564572 1473933 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 02:01:23.583820 1473933 out.go:203] 
	W1119 02:01:23.587150 1473933 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:01:23Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:01:23Z" level=error msg="open /run/runc: no such file or directory"
	
	W1119 02:01:23.587168 1473933 out.go:285] * 
	* 
	W1119 02:01:23.596226 1473933 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 02:01:23.599760 1473933 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-arm64 -p addons-238225 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (6.28s)
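
The gadget workload itself was healthy (k8s-app=gadget became ready within ~6s); only the disable step failed, for the same runc paused-state reason described above. A short hedged check that the addon is still deployed after the failed disable; the namespace and label come from the test, while the exact workload names are not shown in this report.

    kubectl --context addons-238225 -n gadget get pods -l k8s-app=gadget
    kubectl --context addons-238225 -n gadget get daemonsets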

                                                
                                    
TestAddons/parallel/MetricsServer (6.45s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 7.616965ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-wjr8r" [c1645465-d21f-488d-b849-db3aca1a5ba3] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003397628s
addons_test.go:463: (dbg) Run:  kubectl --context addons-238225 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-238225 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-238225 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (325.034585ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1119 02:01:17.087819 1473780 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:01:17.089586 1473780 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:01:17.089635 1473780 out.go:374] Setting ErrFile to fd 2...
	I1119 02:01:17.089655 1473780 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:01:17.089951 1473780 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-1463525/.minikube/bin
	I1119 02:01:17.090321 1473780 mustload.go:66] Loading cluster: addons-238225
	I1119 02:01:17.090737 1473780 config.go:182] Loaded profile config "addons-238225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:01:17.090770 1473780 addons.go:607] checking whether the cluster is paused
	I1119 02:01:17.090895 1473780 config.go:182] Loaded profile config "addons-238225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:01:17.090924 1473780 host.go:66] Checking if "addons-238225" exists ...
	I1119 02:01:17.091404 1473780 cli_runner.go:164] Run: docker container inspect addons-238225 --format={{.State.Status}}
	I1119 02:01:17.114888 1473780 ssh_runner.go:195] Run: systemctl --version
	I1119 02:01:17.114941 1473780 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-238225
	I1119 02:01:17.140233 1473780 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34614 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/addons-238225/id_rsa Username:docker}
	I1119 02:01:17.244078 1473780 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 02:01:17.244159 1473780 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 02:01:17.283257 1473780 cri.go:89] found id: "9c232d33326a74acf603b22763aa1dc42d70a479e12432e50bfa9c9405eed2d8"
	I1119 02:01:17.283277 1473780 cri.go:89] found id: "772cfe62f02aaecb841e889b3cbb65d2a9b5651157073f7db00ccf3ddff4c0f1"
	I1119 02:01:17.283289 1473780 cri.go:89] found id: "21ceae69f9b817368777579d582cb590af22569048c0dbfdcc0ff812b0e66e82"
	I1119 02:01:17.283294 1473780 cri.go:89] found id: "d64b782c68c2447187c1e3efd65e0913b2455f09ccc969d04feb74abe38e660a"
	I1119 02:01:17.283297 1473780 cri.go:89] found id: "5e8f0f7f444317dfe5eacc4508981c90287704743391b71bc5ccb185d00f1f05"
	I1119 02:01:17.283301 1473780 cri.go:89] found id: "b38eaf566b86ba36188f3bd9d9c4bf78d2c17cca364dfee5652ee99d4b60a7b9"
	I1119 02:01:17.283304 1473780 cri.go:89] found id: "913d1dc20a3a2502a8c5187d02817ea7496846fae75cdd364154dcf3ba504b95"
	I1119 02:01:17.283307 1473780 cri.go:89] found id: "d4baa1f0a47d31c47f92d4737ca4bf1a74bf81781024ad8fb0bc1aab729ee9e4"
	I1119 02:01:17.283311 1473780 cri.go:89] found id: "91ecb63aa939ed937635e6c758cf1f28306bf72385e48c4d5d6e5eac9fe999f5"
	I1119 02:01:17.283316 1473780 cri.go:89] found id: "c53519ba9e004b3ff7be4f7f3cef7fab949fdcf796eaede9f39a73fd6b199e6e"
	I1119 02:01:17.283320 1473780 cri.go:89] found id: "dcca1b842fe4422e2747d4422c0f9f7b575eecab7d393d4f0995a33df7c79162"
	I1119 02:01:17.283323 1473780 cri.go:89] found id: "e4c59f62ececb825cf3a40b0802bc8b6ecb4d59770f79a84a7403b9319302101"
	I1119 02:01:17.283326 1473780 cri.go:89] found id: "be9d5b6bedfbc91bb699344892f0474b20a841254ce6fd3144408edd11bc007d"
	I1119 02:01:17.283329 1473780 cri.go:89] found id: "d79a6a486de50d2c0685228164143b81b6a22900f48f3a05491c47877066261b"
	I1119 02:01:17.283333 1473780 cri.go:89] found id: "99e3704db1eb401031a862edd15e56b4aec5c806bb339f38d76ba88c7e8fa047"
	I1119 02:01:17.283337 1473780 cri.go:89] found id: "b94070f6dc6d4ea17b3a67020e38e4caa93a1b8b83d5bb691770abfbccddba96"
	I1119 02:01:17.283341 1473780 cri.go:89] found id: "d0f307f4b6c3423d1af0ad1f8066d8df474dcdfb5ec77842739411e57b5bbc77"
	I1119 02:01:17.283344 1473780 cri.go:89] found id: "0c54dc25c8ad51cf6765dc7bc85a062001f2e7ac00a156aaa64443d92f972181"
	I1119 02:01:17.283347 1473780 cri.go:89] found id: "a841f7bd1c9314f581270e99b5249d563aa54a685fc9377709257d65d7241884"
	I1119 02:01:17.283350 1473780 cri.go:89] found id: "76ee598a60e1ecbb0846681cb536270450910784fdbfeec1b724bbc506bc7fc1"
	I1119 02:01:17.283355 1473780 cri.go:89] found id: "a757a1a6114f803952eab86dab9d7a3706e530f2d53eccfb6a046fcfea9ad3b4"
	I1119 02:01:17.283358 1473780 cri.go:89] found id: "7a77a55a81c017bef912f34dd320fb488cb213cabc9bee0e9a3126964c29252b"
	I1119 02:01:17.283361 1473780 cri.go:89] found id: "85abfad90a4c2830eebe69eeb776b9e0f018907069c8517cc51c16103c6b98c1"
	I1119 02:01:17.283364 1473780 cri.go:89] found id: ""
	I1119 02:01:17.283411 1473780 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 02:01:17.304201 1473780 out.go:203] 
	W1119 02:01:17.308271 1473780 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:01:17Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:01:17Z" level=error msg="open /run/runc: no such file or directory"
	
	W1119 02:01:17.308302 1473780 out.go:285] * 
	* 
	W1119 02:01:17.317243 1473780 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 02:01:17.322404 1473780 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-arm64 -p addons-238225 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (6.45s)
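
As with the other addon tests, the functional half passed: metrics-server answered "kubectl top pods" within ~6s and only the disable step failed. A hedged sketch for re-checking the addon by hand; the deployment name and the raw metrics API path are the usual ones for metrics-server but are assumptions, not taken from this report.

    kubectl --context addons-238225 -n kube-system get deploy metrics-server
    kubectl --context addons-238225 top pods -n kube-system
    kubectl --context addons-238225 get --raw /apis/metrics.k8s.io/v1beta1/nodes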

                                                
                                    
TestAddons/parallel/CSI (39.84s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1119 02:01:02.632466 1465377 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1119 02:01:02.636377 1465377 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1119 02:01:02.636405 1465377 kapi.go:107] duration metric: took 3.958684ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 3.969105ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-238225 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-238225 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-238225 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-238225 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-238225 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-238225 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-238225 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-238225 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-238225 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-238225 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-238225 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-238225 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-238225 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-238225 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [299caa07-7866-43f9-ad61-25ec292001dc] Pending
helpers_test.go:352: "task-pv-pod" [299caa07-7866-43f9-ad61-25ec292001dc] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [299caa07-7866-43f9-ad61-25ec292001dc] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.003556547s
addons_test.go:572: (dbg) Run:  kubectl --context addons-238225 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-238225 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:435: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:427: (dbg) Run:  kubectl --context addons-238225 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-238225 delete pod task-pv-pod
addons_test.go:582: (dbg) Done: kubectl --context addons-238225 delete pod task-pv-pod: (1.189640048s)
addons_test.go:588: (dbg) Run:  kubectl --context addons-238225 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-238225 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-238225 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-238225 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-238225 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-238225 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-238225 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-238225 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-238225 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [613527d8-42c9-4aad-aa99-a78bae7f18f9] Pending
helpers_test.go:352: "task-pv-pod-restore" [613527d8-42c9-4aad-aa99-a78bae7f18f9] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [613527d8-42c9-4aad-aa99-a78bae7f18f9] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003193069s
addons_test.go:614: (dbg) Run:  kubectl --context addons-238225 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-238225 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-238225 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-238225 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-238225 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (298.932239ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1119 02:01:41.943062 1474620 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:01:41.944653 1474620 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:01:41.944676 1474620 out.go:374] Setting ErrFile to fd 2...
	I1119 02:01:41.944682 1474620 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:01:41.944975 1474620 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-1463525/.minikube/bin
	I1119 02:01:41.945291 1474620 mustload.go:66] Loading cluster: addons-238225
	I1119 02:01:41.945719 1474620 config.go:182] Loaded profile config "addons-238225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:01:41.945738 1474620 addons.go:607] checking whether the cluster is paused
	I1119 02:01:41.945848 1474620 config.go:182] Loaded profile config "addons-238225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:01:41.945866 1474620 host.go:66] Checking if "addons-238225" exists ...
	I1119 02:01:41.946323 1474620 cli_runner.go:164] Run: docker container inspect addons-238225 --format={{.State.Status}}
	I1119 02:01:41.964527 1474620 ssh_runner.go:195] Run: systemctl --version
	I1119 02:01:41.964582 1474620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-238225
	I1119 02:01:41.982308 1474620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34614 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/addons-238225/id_rsa Username:docker}
	I1119 02:01:42.099409 1474620 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 02:01:42.099528 1474620 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 02:01:42.144844 1474620 cri.go:89] found id: "9c232d33326a74acf603b22763aa1dc42d70a479e12432e50bfa9c9405eed2d8"
	I1119 02:01:42.144918 1474620 cri.go:89] found id: "772cfe62f02aaecb841e889b3cbb65d2a9b5651157073f7db00ccf3ddff4c0f1"
	I1119 02:01:42.144948 1474620 cri.go:89] found id: "21ceae69f9b817368777579d582cb590af22569048c0dbfdcc0ff812b0e66e82"
	I1119 02:01:42.144979 1474620 cri.go:89] found id: "d64b782c68c2447187c1e3efd65e0913b2455f09ccc969d04feb74abe38e660a"
	I1119 02:01:42.145027 1474620 cri.go:89] found id: "5e8f0f7f444317dfe5eacc4508981c90287704743391b71bc5ccb185d00f1f05"
	I1119 02:01:42.145051 1474620 cri.go:89] found id: "b38eaf566b86ba36188f3bd9d9c4bf78d2c17cca364dfee5652ee99d4b60a7b9"
	I1119 02:01:42.145130 1474620 cri.go:89] found id: "913d1dc20a3a2502a8c5187d02817ea7496846fae75cdd364154dcf3ba504b95"
	I1119 02:01:42.145155 1474620 cri.go:89] found id: "d4baa1f0a47d31c47f92d4737ca4bf1a74bf81781024ad8fb0bc1aab729ee9e4"
	I1119 02:01:42.145177 1474620 cri.go:89] found id: "91ecb63aa939ed937635e6c758cf1f28306bf72385e48c4d5d6e5eac9fe999f5"
	I1119 02:01:42.145204 1474620 cri.go:89] found id: "c53519ba9e004b3ff7be4f7f3cef7fab949fdcf796eaede9f39a73fd6b199e6e"
	I1119 02:01:42.145239 1474620 cri.go:89] found id: "dcca1b842fe4422e2747d4422c0f9f7b575eecab7d393d4f0995a33df7c79162"
	I1119 02:01:42.145265 1474620 cri.go:89] found id: "e4c59f62ececb825cf3a40b0802bc8b6ecb4d59770f79a84a7403b9319302101"
	I1119 02:01:42.145287 1474620 cri.go:89] found id: "be9d5b6bedfbc91bb699344892f0474b20a841254ce6fd3144408edd11bc007d"
	I1119 02:01:42.145310 1474620 cri.go:89] found id: "d79a6a486de50d2c0685228164143b81b6a22900f48f3a05491c47877066261b"
	I1119 02:01:42.145330 1474620 cri.go:89] found id: "99e3704db1eb401031a862edd15e56b4aec5c806bb339f38d76ba88c7e8fa047"
	I1119 02:01:42.145379 1474620 cri.go:89] found id: "b94070f6dc6d4ea17b3a67020e38e4caa93a1b8b83d5bb691770abfbccddba96"
	I1119 02:01:42.145409 1474620 cri.go:89] found id: "d0f307f4b6c3423d1af0ad1f8066d8df474dcdfb5ec77842739411e57b5bbc77"
	I1119 02:01:42.145432 1474620 cri.go:89] found id: "0c54dc25c8ad51cf6765dc7bc85a062001f2e7ac00a156aaa64443d92f972181"
	I1119 02:01:42.145464 1474620 cri.go:89] found id: "a841f7bd1c9314f581270e99b5249d563aa54a685fc9377709257d65d7241884"
	I1119 02:01:42.145482 1474620 cri.go:89] found id: "76ee598a60e1ecbb0846681cb536270450910784fdbfeec1b724bbc506bc7fc1"
	I1119 02:01:42.145560 1474620 cri.go:89] found id: "a757a1a6114f803952eab86dab9d7a3706e530f2d53eccfb6a046fcfea9ad3b4"
	I1119 02:01:42.145587 1474620 cri.go:89] found id: "7a77a55a81c017bef912f34dd320fb488cb213cabc9bee0e9a3126964c29252b"
	I1119 02:01:42.145614 1474620 cri.go:89] found id: "85abfad90a4c2830eebe69eeb776b9e0f018907069c8517cc51c16103c6b98c1"
	I1119 02:01:42.145636 1474620 cri.go:89] found id: ""
	I1119 02:01:42.145731 1474620 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 02:01:42.166943 1474620 out.go:203] 
	W1119 02:01:42.170214 1474620 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:01:42Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:01:42Z" level=error msg="open /run/runc: no such file or directory"
	
	W1119 02:01:42.170247 1474620 out.go:285] * 
	* 
	W1119 02:01:42.181299 1474620 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 02:01:42.184846 1474620 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-arm64 -p addons-238225 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-238225 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-238225 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (277.191414ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1119 02:01:42.252363 1474664 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:01:42.253843 1474664 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:01:42.253876 1474664 out.go:374] Setting ErrFile to fd 2...
	I1119 02:01:42.253884 1474664 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:01:42.254330 1474664 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-1463525/.minikube/bin
	I1119 02:01:42.254850 1474664 mustload.go:66] Loading cluster: addons-238225
	I1119 02:01:42.255362 1474664 config.go:182] Loaded profile config "addons-238225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:01:42.255389 1474664 addons.go:607] checking whether the cluster is paused
	I1119 02:01:42.255555 1474664 config.go:182] Loaded profile config "addons-238225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:01:42.255576 1474664 host.go:66] Checking if "addons-238225" exists ...
	I1119 02:01:42.256263 1474664 cli_runner.go:164] Run: docker container inspect addons-238225 --format={{.State.Status}}
	I1119 02:01:42.279259 1474664 ssh_runner.go:195] Run: systemctl --version
	I1119 02:01:42.279343 1474664 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-238225
	I1119 02:01:42.300698 1474664 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34614 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/addons-238225/id_rsa Username:docker}
	I1119 02:01:42.404238 1474664 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 02:01:42.404378 1474664 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 02:01:42.435479 1474664 cri.go:89] found id: "9c232d33326a74acf603b22763aa1dc42d70a479e12432e50bfa9c9405eed2d8"
	I1119 02:01:42.435506 1474664 cri.go:89] found id: "772cfe62f02aaecb841e889b3cbb65d2a9b5651157073f7db00ccf3ddff4c0f1"
	I1119 02:01:42.435511 1474664 cri.go:89] found id: "21ceae69f9b817368777579d582cb590af22569048c0dbfdcc0ff812b0e66e82"
	I1119 02:01:42.435526 1474664 cri.go:89] found id: "d64b782c68c2447187c1e3efd65e0913b2455f09ccc969d04feb74abe38e660a"
	I1119 02:01:42.435529 1474664 cri.go:89] found id: "5e8f0f7f444317dfe5eacc4508981c90287704743391b71bc5ccb185d00f1f05"
	I1119 02:01:42.435537 1474664 cri.go:89] found id: "b38eaf566b86ba36188f3bd9d9c4bf78d2c17cca364dfee5652ee99d4b60a7b9"
	I1119 02:01:42.435541 1474664 cri.go:89] found id: "913d1dc20a3a2502a8c5187d02817ea7496846fae75cdd364154dcf3ba504b95"
	I1119 02:01:42.435546 1474664 cri.go:89] found id: "d4baa1f0a47d31c47f92d4737ca4bf1a74bf81781024ad8fb0bc1aab729ee9e4"
	I1119 02:01:42.435549 1474664 cri.go:89] found id: "91ecb63aa939ed937635e6c758cf1f28306bf72385e48c4d5d6e5eac9fe999f5"
	I1119 02:01:42.435560 1474664 cri.go:89] found id: "c53519ba9e004b3ff7be4f7f3cef7fab949fdcf796eaede9f39a73fd6b199e6e"
	I1119 02:01:42.435564 1474664 cri.go:89] found id: "dcca1b842fe4422e2747d4422c0f9f7b575eecab7d393d4f0995a33df7c79162"
	I1119 02:01:42.435567 1474664 cri.go:89] found id: "e4c59f62ececb825cf3a40b0802bc8b6ecb4d59770f79a84a7403b9319302101"
	I1119 02:01:42.435570 1474664 cri.go:89] found id: "be9d5b6bedfbc91bb699344892f0474b20a841254ce6fd3144408edd11bc007d"
	I1119 02:01:42.435573 1474664 cri.go:89] found id: "d79a6a486de50d2c0685228164143b81b6a22900f48f3a05491c47877066261b"
	I1119 02:01:42.435577 1474664 cri.go:89] found id: "99e3704db1eb401031a862edd15e56b4aec5c806bb339f38d76ba88c7e8fa047"
	I1119 02:01:42.435584 1474664 cri.go:89] found id: "b94070f6dc6d4ea17b3a67020e38e4caa93a1b8b83d5bb691770abfbccddba96"
	I1119 02:01:42.435596 1474664 cri.go:89] found id: "d0f307f4b6c3423d1af0ad1f8066d8df474dcdfb5ec77842739411e57b5bbc77"
	I1119 02:01:42.435601 1474664 cri.go:89] found id: "0c54dc25c8ad51cf6765dc7bc85a062001f2e7ac00a156aaa64443d92f972181"
	I1119 02:01:42.435604 1474664 cri.go:89] found id: "a841f7bd1c9314f581270e99b5249d563aa54a685fc9377709257d65d7241884"
	I1119 02:01:42.435607 1474664 cri.go:89] found id: "76ee598a60e1ecbb0846681cb536270450910784fdbfeec1b724bbc506bc7fc1"
	I1119 02:01:42.435617 1474664 cri.go:89] found id: "a757a1a6114f803952eab86dab9d7a3706e530f2d53eccfb6a046fcfea9ad3b4"
	I1119 02:01:42.435627 1474664 cri.go:89] found id: "7a77a55a81c017bef912f34dd320fb488cb213cabc9bee0e9a3126964c29252b"
	I1119 02:01:42.435630 1474664 cri.go:89] found id: "85abfad90a4c2830eebe69eeb776b9e0f018907069c8517cc51c16103c6b98c1"
	I1119 02:01:42.435632 1474664 cri.go:89] found id: ""
	I1119 02:01:42.435687 1474664 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 02:01:42.450615 1474664 out.go:203] 
	W1119 02:01:42.453622 1474664 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:01:42Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:01:42Z" level=error msg="open /run/runc: no such file or directory"
	
	W1119 02:01:42.453651 1474664 out.go:285] * 
	* 
	W1119 02:01:42.463129 1474664 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 02:01:42.466318 1474664 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-arm64 -p addons-238225 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (39.84s)
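
The CSI workflow itself (provision a PVC, run a pod against it, snapshot, delete, restore) completed; the test only fails on the trailing addon-disable calls. A minimal sketch of the same workflow using "kubectl wait" instead of the harness's polling loop; the manifests are the ones referenced above under testdata/csi-hostpath-driver/, the timeouts are arbitrary, and the jsonpath form of "kubectl wait" assumes a reasonably recent kubectl.

    ctx="--context addons-238225"
    kubectl $ctx create -f testdata/csi-hostpath-driver/pvc.yaml
    kubectl $ctx wait --for=jsonpath='{.status.phase}'=Bound pvc/hpvc --timeout=6m
    kubectl $ctx create -f testdata/csi-hostpath-driver/pv-pod.yaml
    kubectl $ctx wait --for=condition=Ready pod/task-pv-pod --timeout=6m
    kubectl $ctx create -f testdata/csi-hostpath-driver/snapshot.yaml
    kubectl $ctx wait --for=jsonpath='{.status.readyToUse}'=true volumesnapshot/new-snapshot-demo --timeout=6m
    kubectl $ctx delete pod task-pv-pod && kubectl $ctx delete pvc hpvc
    kubectl $ctx create -f testdata/csi-hostpath-driver/pvc-restore.yaml
    kubectl $ctx create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
    kubectl $ctx wait --for=condition=Ready pod/task-pv-pod-restore --timeout=6m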

                                                
                                    
TestAddons/parallel/Headlamp (3.56s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-238225 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable headlamp -p addons-238225 --alsologtostderr -v=1: exit status 11 (286.59118ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1119 02:00:59.139672 1472790 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:00:59.141138 1472790 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:00:59.141185 1472790 out.go:374] Setting ErrFile to fd 2...
	I1119 02:00:59.141209 1472790 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:00:59.141493 1472790 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-1463525/.minikube/bin
	I1119 02:00:59.142001 1472790 mustload.go:66] Loading cluster: addons-238225
	I1119 02:00:59.142431 1472790 config.go:182] Loaded profile config "addons-238225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:00:59.142468 1472790 addons.go:607] checking whether the cluster is paused
	I1119 02:00:59.142595 1472790 config.go:182] Loaded profile config "addons-238225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:00:59.142622 1472790 host.go:66] Checking if "addons-238225" exists ...
	I1119 02:00:59.143089 1472790 cli_runner.go:164] Run: docker container inspect addons-238225 --format={{.State.Status}}
	I1119 02:00:59.160536 1472790 ssh_runner.go:195] Run: systemctl --version
	I1119 02:00:59.160589 1472790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-238225
	I1119 02:00:59.183750 1472790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34614 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/addons-238225/id_rsa Username:docker}
	I1119 02:00:59.284259 1472790 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 02:00:59.284358 1472790 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 02:00:59.320210 1472790 cri.go:89] found id: "9c232d33326a74acf603b22763aa1dc42d70a479e12432e50bfa9c9405eed2d8"
	I1119 02:00:59.320230 1472790 cri.go:89] found id: "772cfe62f02aaecb841e889b3cbb65d2a9b5651157073f7db00ccf3ddff4c0f1"
	I1119 02:00:59.320235 1472790 cri.go:89] found id: "21ceae69f9b817368777579d582cb590af22569048c0dbfdcc0ff812b0e66e82"
	I1119 02:00:59.320238 1472790 cri.go:89] found id: "d64b782c68c2447187c1e3efd65e0913b2455f09ccc969d04feb74abe38e660a"
	I1119 02:00:59.320246 1472790 cri.go:89] found id: "5e8f0f7f444317dfe5eacc4508981c90287704743391b71bc5ccb185d00f1f05"
	I1119 02:00:59.320250 1472790 cri.go:89] found id: "b38eaf566b86ba36188f3bd9d9c4bf78d2c17cca364dfee5652ee99d4b60a7b9"
	I1119 02:00:59.320253 1472790 cri.go:89] found id: "913d1dc20a3a2502a8c5187d02817ea7496846fae75cdd364154dcf3ba504b95"
	I1119 02:00:59.320256 1472790 cri.go:89] found id: "d4baa1f0a47d31c47f92d4737ca4bf1a74bf81781024ad8fb0bc1aab729ee9e4"
	I1119 02:00:59.320258 1472790 cri.go:89] found id: "91ecb63aa939ed937635e6c758cf1f28306bf72385e48c4d5d6e5eac9fe999f5"
	I1119 02:00:59.320264 1472790 cri.go:89] found id: "c53519ba9e004b3ff7be4f7f3cef7fab949fdcf796eaede9f39a73fd6b199e6e"
	I1119 02:00:59.320268 1472790 cri.go:89] found id: "dcca1b842fe4422e2747d4422c0f9f7b575eecab7d393d4f0995a33df7c79162"
	I1119 02:00:59.320271 1472790 cri.go:89] found id: "e4c59f62ececb825cf3a40b0802bc8b6ecb4d59770f79a84a7403b9319302101"
	I1119 02:00:59.320274 1472790 cri.go:89] found id: "be9d5b6bedfbc91bb699344892f0474b20a841254ce6fd3144408edd11bc007d"
	I1119 02:00:59.320277 1472790 cri.go:89] found id: "d79a6a486de50d2c0685228164143b81b6a22900f48f3a05491c47877066261b"
	I1119 02:00:59.320284 1472790 cri.go:89] found id: "99e3704db1eb401031a862edd15e56b4aec5c806bb339f38d76ba88c7e8fa047"
	I1119 02:00:59.320289 1472790 cri.go:89] found id: "b94070f6dc6d4ea17b3a67020e38e4caa93a1b8b83d5bb691770abfbccddba96"
	I1119 02:00:59.320292 1472790 cri.go:89] found id: "d0f307f4b6c3423d1af0ad1f8066d8df474dcdfb5ec77842739411e57b5bbc77"
	I1119 02:00:59.320296 1472790 cri.go:89] found id: "0c54dc25c8ad51cf6765dc7bc85a062001f2e7ac00a156aaa64443d92f972181"
	I1119 02:00:59.320299 1472790 cri.go:89] found id: "a841f7bd1c9314f581270e99b5249d563aa54a685fc9377709257d65d7241884"
	I1119 02:00:59.320302 1472790 cri.go:89] found id: "76ee598a60e1ecbb0846681cb536270450910784fdbfeec1b724bbc506bc7fc1"
	I1119 02:00:59.320308 1472790 cri.go:89] found id: "a757a1a6114f803952eab86dab9d7a3706e530f2d53eccfb6a046fcfea9ad3b4"
	I1119 02:00:59.320311 1472790 cri.go:89] found id: "7a77a55a81c017bef912f34dd320fb488cb213cabc9bee0e9a3126964c29252b"
	I1119 02:00:59.320314 1472790 cri.go:89] found id: "85abfad90a4c2830eebe69eeb776b9e0f018907069c8517cc51c16103c6b98c1"
	I1119 02:00:59.320317 1472790 cri.go:89] found id: ""
	I1119 02:00:59.320370 1472790 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 02:00:59.336402 1472790 out.go:203] 
	W1119 02:00:59.339761 1472790 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:00:59Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:00:59Z" level=error msg="open /run/runc: no such file or directory"
	
	W1119 02:00:59.339808 1472790 out.go:285] * 
	* 
	W1119 02:00:59.348783 1472790 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 02:00:59.351985 1472790 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-arm64 addons enable headlamp -p addons-238225 --alsologtostderr -v=1": exit status 11
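The stderr above also shows the first half of that probe: cri.go builds the candidate list by running `sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system` and records each returned ID (the "found id:" lines) before the runc step fails. A rough standalone sketch of that listing step follows, again for illustration only and assuming crictl is installed on the node.

// crictl_list_sketch.go (hypothetical file name): list kube-system container
// IDs the same way the log above does. `crictl ps --quiet` prints one
// container ID per line, which is what cri.go:89 reports as "found id:".
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func kubeSystemContainerIDs() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, fmt.Errorf("crictl ps: %w", err)
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	ids, err := kubeSystemContainerIDs()
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("found %d kube-system containers\n", len(ids))
}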
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-238225
helpers_test.go:243: (dbg) docker inspect addons-238225:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "bb862ec6c86ae848db42de546db1fa5e2ba1b98abae1028bc8c65e63056c58e8",
	        "Created": "2025-11-19T01:58:05.25132883Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1466580,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T01:58:05.312347122Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:6dfeb5329cc83d126555f88f43b09eef7a09c7f546c9166b94d33747df91b6df",
	        "ResolvConfPath": "/var/lib/docker/containers/bb862ec6c86ae848db42de546db1fa5e2ba1b98abae1028bc8c65e63056c58e8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bb862ec6c86ae848db42de546db1fa5e2ba1b98abae1028bc8c65e63056c58e8/hostname",
	        "HostsPath": "/var/lib/docker/containers/bb862ec6c86ae848db42de546db1fa5e2ba1b98abae1028bc8c65e63056c58e8/hosts",
	        "LogPath": "/var/lib/docker/containers/bb862ec6c86ae848db42de546db1fa5e2ba1b98abae1028bc8c65e63056c58e8/bb862ec6c86ae848db42de546db1fa5e2ba1b98abae1028bc8c65e63056c58e8-json.log",
	        "Name": "/addons-238225",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-238225:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-238225",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "bb862ec6c86ae848db42de546db1fa5e2ba1b98abae1028bc8c65e63056c58e8",
	                "LowerDir": "/var/lib/docker/overlay2/c3d6f04405a20c13146ce0925cf4f362b7291938412bf7b19c1b899ae703e6ee-init/diff:/var/lib/docker/overlay2/c48d08e2bd245db4e1c5c6447aff9f72126e9377265a1f1172daf5070a059e2a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c3d6f04405a20c13146ce0925cf4f362b7291938412bf7b19c1b899ae703e6ee/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c3d6f04405a20c13146ce0925cf4f362b7291938412bf7b19c1b899ae703e6ee/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c3d6f04405a20c13146ce0925cf4f362b7291938412bf7b19c1b899ae703e6ee/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-238225",
	                "Source": "/var/lib/docker/volumes/addons-238225/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-238225",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-238225",
	                "name.minikube.sigs.k8s.io": "addons-238225",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "40172d9f065d4d169eb7efe20c2a1f540a540d918506289c1d5e8c4e2c96efb0",
	            "SandboxKey": "/var/run/docker/netns/40172d9f065d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34614"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34615"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34618"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34616"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34617"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-238225": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1a:ca:f7:4b:26:e3",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "762382ad93c346744971eeb0989cc075ed25beb2a4ed8d7589e9c787cee67cfe",
	                    "EndpointID": "d7ebfa5485a67620e770e95408df74bf2a1c4a6bf0d5c7b02c95864b61584838",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-238225",
	                        "bb862ec6c86a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
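The docker inspect dump above is also where the SSH endpoint used throughout this test comes from: sshutil.go dials 127.0.0.1:34614 because that is the host port bound to the container's 22/tcp under NetworkSettings.Ports. The small Go sketch below extracts the same binding from `docker inspect addons-238225` output; it is illustrative only and expects the inspect JSON on stdin (for example, `docker inspect addons-238225 | go run extract_ssh_port.go`, where the file name is hypothetical).

// extract_ssh_port.go (hypothetical file name): pull the host port mapped to
// the guest's 22/tcp out of `docker inspect` JSON, the value (34614 here)
// that sshutil.go logs when it opens the SSH client.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// portBinding mirrors the entries under NetworkSettings.Ports in inspect output.
type portBinding struct {
	HostIp   string `json:"HostIp"`
	HostPort string `json:"HostPort"`
}

// inspectEntry keeps only the fields this sketch needs from the inspect JSON.
type inspectEntry struct {
	NetworkSettings struct {
		Ports map[string][]portBinding `json:"Ports"`
	} `json:"NetworkSettings"`
}

func main() {
	// Expects the output of `docker inspect <container>` on stdin.
	var entries []inspectEntry
	if err := json.NewDecoder(os.Stdin).Decode(&entries); err != nil {
		fmt.Fprintln(os.Stderr, "decode inspect JSON:", err)
		os.Exit(1)
	}
	if len(entries) == 0 {
		fmt.Fprintln(os.Stderr, "no container in inspect output")
		os.Exit(1)
	}
	bindings := entries[0].NetworkSettings.Ports["22/tcp"]
	if len(bindings) == 0 {
		fmt.Fprintln(os.Stderr, "no 22/tcp binding found")
		os.Exit(1)
	}
	// For the report above this prints: ssh endpoint: 127.0.0.1:34614
	fmt.Printf("ssh endpoint: %s:%s\n", bindings[0].HostIp, bindings[0].HostPort)
}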
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-238225 -n addons-238225
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-238225 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-238225 logs -n 25: (1.828395882s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-051528 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-051528   │ jenkins │ v1.37.0 │ 19 Nov 25 01:56 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 19 Nov 25 01:57 UTC │ 19 Nov 25 01:57 UTC │
	│ delete  │ -p download-only-051528                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-051528   │ jenkins │ v1.37.0 │ 19 Nov 25 01:57 UTC │ 19 Nov 25 01:57 UTC │
	│ start   │ -o=json --download-only -p download-only-126461 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-126461   │ jenkins │ v1.37.0 │ 19 Nov 25 01:57 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 19 Nov 25 01:57 UTC │ 19 Nov 25 01:57 UTC │
	│ delete  │ -p download-only-126461                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-126461   │ jenkins │ v1.37.0 │ 19 Nov 25 01:57 UTC │ 19 Nov 25 01:57 UTC │
	│ delete  │ -p download-only-051528                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-051528   │ jenkins │ v1.37.0 │ 19 Nov 25 01:57 UTC │ 19 Nov 25 01:57 UTC │
	│ delete  │ -p download-only-126461                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-126461   │ jenkins │ v1.37.0 │ 19 Nov 25 01:57 UTC │ 19 Nov 25 01:57 UTC │
	│ start   │ --download-only -p download-docker-772744 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-772744 │ jenkins │ v1.37.0 │ 19 Nov 25 01:57 UTC │                     │
	│ delete  │ -p download-docker-772744                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-772744 │ jenkins │ v1.37.0 │ 19 Nov 25 01:57 UTC │ 19 Nov 25 01:57 UTC │
	│ start   │ --download-only -p binary-mirror-689753 --alsologtostderr --binary-mirror http://127.0.0.1:36283 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-689753   │ jenkins │ v1.37.0 │ 19 Nov 25 01:57 UTC │                     │
	│ delete  │ -p binary-mirror-689753                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-689753   │ jenkins │ v1.37.0 │ 19 Nov 25 01:57 UTC │ 19 Nov 25 01:57 UTC │
	│ addons  │ enable dashboard -p addons-238225                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-238225          │ jenkins │ v1.37.0 │ 19 Nov 25 01:57 UTC │                     │
	│ addons  │ disable dashboard -p addons-238225                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-238225          │ jenkins │ v1.37.0 │ 19 Nov 25 01:57 UTC │                     │
	│ start   │ -p addons-238225 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-238225          │ jenkins │ v1.37.0 │ 19 Nov 25 01:57 UTC │ 19 Nov 25 02:00 UTC │
	│ addons  │ addons-238225 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-238225          │ jenkins │ v1.37.0 │ 19 Nov 25 02:00 UTC │                     │
	│ addons  │ addons-238225 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-238225          │ jenkins │ v1.37.0 │ 19 Nov 25 02:00 UTC │                     │
	│ addons  │ addons-238225 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-238225          │ jenkins │ v1.37.0 │ 19 Nov 25 02:00 UTC │                     │
	│ addons  │ addons-238225 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-238225          │ jenkins │ v1.37.0 │ 19 Nov 25 02:00 UTC │                     │
	│ ip      │ addons-238225 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-238225          │ jenkins │ v1.37.0 │ 19 Nov 25 02:00 UTC │ 19 Nov 25 02:00 UTC │
	│ addons  │ addons-238225 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-238225          │ jenkins │ v1.37.0 │ 19 Nov 25 02:00 UTC │                     │
	│ addons  │ addons-238225 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-238225          │ jenkins │ v1.37.0 │ 19 Nov 25 02:00 UTC │                     │
	│ addons  │ enable headlamp -p addons-238225 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-238225          │ jenkins │ v1.37.0 │ 19 Nov 25 02:00 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 01:57:39
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 01:57:39.887628 1466137 out.go:360] Setting OutFile to fd 1 ...
	I1119 01:57:39.887841 1466137 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 01:57:39.887870 1466137 out.go:374] Setting ErrFile to fd 2...
	I1119 01:57:39.887889 1466137 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 01:57:39.888171 1466137 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-1463525/.minikube/bin
	I1119 01:57:39.888671 1466137 out.go:368] Setting JSON to false
	I1119 01:57:39.889558 1466137 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":34787,"bootTime":1763482673,"procs":143,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1119 01:57:39.889653 1466137 start.go:143] virtualization:  
	I1119 01:57:39.893175 1466137 out.go:179] * [addons-238225] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1119 01:57:39.896260 1466137 out.go:179]   - MINIKUBE_LOCATION=21924
	I1119 01:57:39.896340 1466137 notify.go:221] Checking for updates...
	I1119 01:57:39.902134 1466137 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 01:57:39.905102 1466137 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21924-1463525/kubeconfig
	I1119 01:57:39.908043 1466137 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-1463525/.minikube
	I1119 01:57:39.910916 1466137 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1119 01:57:39.913780 1466137 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 01:57:39.916781 1466137 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 01:57:39.940446 1466137 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1119 01:57:39.940584 1466137 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 01:57:40.002115 1466137 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2025-11-19 01:57:39.992786597 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 01:57:40.002234 1466137 docker.go:319] overlay module found
	I1119 01:57:40.012168 1466137 out.go:179] * Using the docker driver based on user configuration
	I1119 01:57:40.017606 1466137 start.go:309] selected driver: docker
	I1119 01:57:40.017638 1466137 start.go:930] validating driver "docker" against <nil>
	I1119 01:57:40.017654 1466137 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 01:57:40.018543 1466137 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 01:57:40.080713 1466137 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2025-11-19 01:57:40.071603283 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 01:57:40.080872 1466137 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1119 01:57:40.081111 1466137 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 01:57:40.083887 1466137 out.go:179] * Using Docker driver with root privileges
	I1119 01:57:40.086657 1466137 cni.go:84] Creating CNI manager for ""
	I1119 01:57:40.086729 1466137 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 01:57:40.086740 1466137 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1119 01:57:40.086825 1466137 start.go:353] cluster config:
	{Name:addons-238225 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-238225 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1119 01:57:40.090050 1466137 out.go:179] * Starting "addons-238225" primary control-plane node in "addons-238225" cluster
	I1119 01:57:40.092976 1466137 cache.go:134] Beginning downloading kic base image for docker with crio
	I1119 01:57:40.096012 1466137 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1119 01:57:40.099012 1466137 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 01:57:40.099053 1466137 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1119 01:57:40.099106 1466137 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1119 01:57:40.099118 1466137 cache.go:65] Caching tarball of preloaded images
	I1119 01:57:40.099213 1466137 preload.go:238] Found /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1119 01:57:40.099225 1466137 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 01:57:40.099709 1466137 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/config.json ...
	I1119 01:57:40.099759 1466137 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/config.json: {Name:mk0be708edd925bb7df5f8d5c43c2fb624d9f741 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 01:57:40.116328 1466137 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a to local cache
	I1119 01:57:40.116461 1466137 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local cache directory
	I1119 01:57:40.116482 1466137 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local cache directory, skipping pull
	I1119 01:57:40.116487 1466137 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in cache, skipping pull
	I1119 01:57:40.116495 1466137 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a as a tarball
	I1119 01:57:40.116501 1466137 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a from local cache
	I1119 01:57:58.298287 1466137 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a from cached tarball
	I1119 01:57:58.298328 1466137 cache.go:243] Successfully downloaded all kic artifacts
	I1119 01:57:58.298359 1466137 start.go:360] acquireMachinesLock for addons-238225: {Name:mk62d20918077dda75b87e2eea537d37ef4e35a5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 01:57:58.299128 1466137 start.go:364] duration metric: took 745.554µs to acquireMachinesLock for "addons-238225"
	I1119 01:57:58.299177 1466137 start.go:93] Provisioning new machine with config: &{Name:addons-238225 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-238225 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 01:57:58.299257 1466137 start.go:125] createHost starting for "" (driver="docker")
	I1119 01:57:58.302652 1466137 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1119 01:57:58.302892 1466137 start.go:159] libmachine.API.Create for "addons-238225" (driver="docker")
	I1119 01:57:58.302939 1466137 client.go:173] LocalClient.Create starting
	I1119 01:57:58.303042 1466137 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem
	I1119 01:57:58.655933 1466137 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/cert.pem
	I1119 01:57:58.731765 1466137 cli_runner.go:164] Run: docker network inspect addons-238225 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1119 01:57:58.747475 1466137 cli_runner.go:211] docker network inspect addons-238225 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1119 01:57:58.747567 1466137 network_create.go:284] running [docker network inspect addons-238225] to gather additional debugging logs...
	I1119 01:57:58.747589 1466137 cli_runner.go:164] Run: docker network inspect addons-238225
	W1119 01:57:58.762882 1466137 cli_runner.go:211] docker network inspect addons-238225 returned with exit code 1
	I1119 01:57:58.762914 1466137 network_create.go:287] error running [docker network inspect addons-238225]: docker network inspect addons-238225: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-238225 not found
	I1119 01:57:58.762940 1466137 network_create.go:289] output of [docker network inspect addons-238225]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-238225 not found
	
	** /stderr **
	I1119 01:57:58.763036 1466137 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 01:57:58.778718 1466137 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400192fba0}
	I1119 01:57:58.778756 1466137 network_create.go:124] attempt to create docker network addons-238225 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1119 01:57:58.778814 1466137 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-238225 addons-238225
	I1119 01:57:58.832923 1466137 network_create.go:108] docker network addons-238225 192.168.49.0/24 created
	I1119 01:57:58.832956 1466137 kic.go:121] calculated static IP "192.168.49.2" for the "addons-238225" container
	I1119 01:57:58.833046 1466137 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1119 01:57:58.850182 1466137 cli_runner.go:164] Run: docker volume create addons-238225 --label name.minikube.sigs.k8s.io=addons-238225 --label created_by.minikube.sigs.k8s.io=true
	I1119 01:57:58.869962 1466137 oci.go:103] Successfully created a docker volume addons-238225
	I1119 01:57:58.870053 1466137 cli_runner.go:164] Run: docker run --rm --name addons-238225-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-238225 --entrypoint /usr/bin/test -v addons-238225:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib
	I1119 01:58:00.761991 1466137 cli_runner.go:217] Completed: docker run --rm --name addons-238225-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-238225 --entrypoint /usr/bin/test -v addons-238225:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib: (1.891899795s)
	I1119 01:58:00.762019 1466137 oci.go:107] Successfully prepared a docker volume addons-238225
	I1119 01:58:00.762078 1466137 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 01:58:00.762092 1466137 kic.go:194] Starting extracting preloaded images to volume ...
	I1119 01:58:00.762165 1466137 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-238225:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir
	I1119 01:58:05.178333 1466137 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-238225:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir: (4.416129642s)
	I1119 01:58:05.178366 1466137 kic.go:203] duration metric: took 4.416270045s to extract preloaded images to volume ...
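The preload step above fills the named volume before the node container even exists: a throwaway container mounts the tarball read-only, mounts the volume at /extractDir, and runs tar with lz4 decompression. A rough Go sketch of that pattern (volume name, tarball path and image are placeholders; the image only needs to ship tar and the lz4 binary):
	// preload_sketch.go - populate a named Docker volume from a .tar.lz4 using a
	// short-lived container, the same shape as the "docker run --rm --entrypoint
	// /usr/bin/tar ..." command in the log.
	package main
	
	import (
		"log"
		"os/exec"
		"time"
	)
	
	func main() {
		volume := "demo-vol"                       // hypothetical volume name
		tarball := "/tmp/preloaded-images.tar.lz4" // hypothetical local path
		image := "debian:bookworm"                 // any image providing tar + lz4
	
		if err := exec.Command("docker", "volume", "create", volume).Run(); err != nil {
			log.Fatal(err)
		}
	
		start := time.Now()
		extract := exec.Command("docker", "run", "--rm",
			"--entrypoint", "/usr/bin/tar",
			"-v", tarball+":/preloaded.tar:ro",
			"-v", volume+":/extractDir",
			image,
			"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
		if out, err := extract.CombinedOutput(); err != nil {
			log.Fatalf("extract failed: %v: %s", err, out)
		}
		log.Printf("extracted preload into volume %s in %s", volume, time.Since(start))
	}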
	W1119 01:58:05.178496 1466137 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1119 01:58:05.178607 1466137 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1119 01:58:05.236744 1466137 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-238225 --name addons-238225 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-238225 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-238225 --network addons-238225 --ip 192.168.49.2 --volume addons-238225:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a
	I1119 01:58:05.550352 1466137 cli_runner.go:164] Run: docker container inspect addons-238225 --format={{.State.Running}}
	I1119 01:58:05.571557 1466137 cli_runner.go:164] Run: docker container inspect addons-238225 --format={{.State.Status}}
	I1119 01:58:05.593696 1466137 cli_runner.go:164] Run: docker exec addons-238225 stat /var/lib/dpkg/alternatives/iptables
	I1119 01:58:05.648811 1466137 oci.go:144] the created container "addons-238225" has a running status.
	I1119 01:58:05.648841 1466137 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21924-1463525/.minikube/machines/addons-238225/id_rsa...
	I1119 01:58:05.757849 1466137 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21924-1463525/.minikube/machines/addons-238225/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1119 01:58:05.780141 1466137 cli_runner.go:164] Run: docker container inspect addons-238225 --format={{.State.Status}}
	I1119 01:58:05.802052 1466137 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1119 01:58:05.802073 1466137 kic_runner.go:114] Args: [docker exec --privileged addons-238225 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1119 01:58:05.858091 1466137 cli_runner.go:164] Run: docker container inspect addons-238225 --format={{.State.Status}}
	I1119 01:58:05.887254 1466137 machine.go:94] provisionDockerMachine start ...
	I1119 01:58:05.887357 1466137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-238225
	I1119 01:58:05.919833 1466137 main.go:143] libmachine: Using SSH client type: native
	I1119 01:58:05.920152 1466137 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34614 <nil> <nil>}
	I1119 01:58:05.920167 1466137 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 01:58:05.920817 1466137 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1119 01:58:09.061005 1466137 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-238225
	
	I1119 01:58:09.061085 1466137 ubuntu.go:182] provisioning hostname "addons-238225"
	I1119 01:58:09.061169 1466137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-238225
	I1119 01:58:09.078223 1466137 main.go:143] libmachine: Using SSH client type: native
	I1119 01:58:09.078537 1466137 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34614 <nil> <nil>}
	I1119 01:58:09.078554 1466137 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-238225 && echo "addons-238225" | sudo tee /etc/hostname
	I1119 01:58:09.226025 1466137 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-238225
	
	I1119 01:58:09.226120 1466137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-238225
	I1119 01:58:09.243606 1466137 main.go:143] libmachine: Using SSH client type: native
	I1119 01:58:09.243926 1466137 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34614 <nil> <nil>}
	I1119 01:58:09.243952 1466137 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-238225' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-238225/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-238225' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 01:58:09.389414 1466137 main.go:143] libmachine: SSH cmd err, output: <nil>: 
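provisionDockerMachine above drives every step over SSH to the port Docker published on 127.0.0.1: run `hostname`, set it, patch /etc/hosts. A small sketch of that loop's core using golang.org/x/crypto/ssh (the key path is a placeholder and the port is the one from this log; libmachine's native client shown in the log is more elaborate than this):
	// ssh_sketch.go - dial the forwarded SSH port and run a single command,
	// the basic primitive behind the "About to run SSH command" lines.
	package main
	
	import (
		"fmt"
		"log"
		"os"
	
		"golang.org/x/crypto/ssh"
	)
	
	func main() {
		key, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/machines/demo/id_rsa")) // hypothetical path
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			log.Fatal(err)
		}
	
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a loopback-only test node
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:34614", cfg) // port taken from the log
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()
	
		session, err := client.NewSession()
		if err != nil {
			log.Fatal(err)
		}
		defer session.Close()
	
		out, err := session.Output("hostname")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("remote hostname: %s", out)
	}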
	I1119 01:58:09.389448 1466137 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21924-1463525/.minikube CaCertPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21924-1463525/.minikube}
	I1119 01:58:09.389473 1466137 ubuntu.go:190] setting up certificates
	I1119 01:58:09.389483 1466137 provision.go:84] configureAuth start
	I1119 01:58:09.389562 1466137 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-238225
	I1119 01:58:09.405592 1466137 provision.go:143] copyHostCerts
	I1119 01:58:09.405675 1466137 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.pem (1078 bytes)
	I1119 01:58:09.405808 1466137 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21924-1463525/.minikube/cert.pem (1123 bytes)
	I1119 01:58:09.405884 1466137 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21924-1463525/.minikube/key.pem (1675 bytes)
	I1119 01:58:09.405944 1466137 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca-key.pem org=jenkins.addons-238225 san=[127.0.0.1 192.168.49.2 addons-238225 localhost minikube]
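The server certificate generated above has to carry every name and address the endpoint may be reached by, which is what the san=[...] list encodes. A simplified, self-signed stand-in using crypto/x509 with the same SAN set (the real cert is signed by the minikube CA rather than by itself):
	// cert_sketch.go - issue a certificate whose SANs cover both DNS names and IPs,
	// mirroring the san=[127.0.0.1 192.168.49.2 addons-238225 localhost minikube] line.
	package main
	
	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)
	
	func main() {
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			log.Fatal(err)
		}
	
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.addons-238225"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// Same SAN set as the log line above.
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
			DNSNames:    []string{"addons-238225", "localhost", "minikube"},
		}
	
		// Self-signed for brevity; the real flow passes the CA cert/key as parent/signer.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			log.Fatal(err)
		}
		if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
			log.Fatal(err)
		}
	}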
	I1119 01:58:09.667984 1466137 provision.go:177] copyRemoteCerts
	I1119 01:58:09.668054 1466137 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 01:58:09.668095 1466137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-238225
	I1119 01:58:09.686289 1466137 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34614 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/addons-238225/id_rsa Username:docker}
	I1119 01:58:09.790013 1466137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1119 01:58:09.806670 1466137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1119 01:58:09.823790 1466137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1119 01:58:09.840730 1466137 provision.go:87] duration metric: took 451.230783ms to configureAuth
	I1119 01:58:09.840754 1466137 ubuntu.go:206] setting minikube options for container-runtime
	I1119 01:58:09.840974 1466137 config.go:182] Loaded profile config "addons-238225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 01:58:09.841090 1466137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-238225
	I1119 01:58:09.857991 1466137 main.go:143] libmachine: Using SSH client type: native
	I1119 01:58:09.858326 1466137 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34614 <nil> <nil>}
	I1119 01:58:09.858346 1466137 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 01:58:10.152667 1466137 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 01:58:10.152691 1466137 machine.go:97] duration metric: took 4.26541208s to provisionDockerMachine
	I1119 01:58:10.152701 1466137 client.go:176] duration metric: took 11.849752219s to LocalClient.Create
	I1119 01:58:10.152718 1466137 start.go:167] duration metric: took 11.849822945s to libmachine.API.Create "addons-238225"
	I1119 01:58:10.152728 1466137 start.go:293] postStartSetup for "addons-238225" (driver="docker")
	I1119 01:58:10.152742 1466137 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 01:58:10.152805 1466137 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 01:58:10.152851 1466137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-238225
	I1119 01:58:10.172016 1466137 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34614 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/addons-238225/id_rsa Username:docker}
	I1119 01:58:10.272983 1466137 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 01:58:10.275945 1466137 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 01:58:10.275970 1466137 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 01:58:10.275981 1466137 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-1463525/.minikube/addons for local assets ...
	I1119 01:58:10.276042 1466137 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-1463525/.minikube/files for local assets ...
	I1119 01:58:10.276070 1466137 start.go:296] duration metric: took 123.333104ms for postStartSetup
	I1119 01:58:10.276417 1466137 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-238225
	I1119 01:58:10.292043 1466137 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/config.json ...
	I1119 01:58:10.292315 1466137 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 01:58:10.292367 1466137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-238225
	I1119 01:58:10.307911 1466137 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34614 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/addons-238225/id_rsa Username:docker}
	I1119 01:58:10.402329 1466137 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 01:58:10.406715 1466137 start.go:128] duration metric: took 12.107442418s to createHost
	I1119 01:58:10.406741 1466137 start.go:83] releasing machines lock for "addons-238225", held for 12.107596818s
	I1119 01:58:10.406817 1466137 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-238225
	I1119 01:58:10.422726 1466137 ssh_runner.go:195] Run: cat /version.json
	I1119 01:58:10.422785 1466137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-238225
	I1119 01:58:10.422788 1466137 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 01:58:10.422854 1466137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-238225
	I1119 01:58:10.445665 1466137 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34614 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/addons-238225/id_rsa Username:docker}
	I1119 01:58:10.447272 1466137 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34614 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/addons-238225/id_rsa Username:docker}
	I1119 01:58:10.633790 1466137 ssh_runner.go:195] Run: systemctl --version
	I1119 01:58:10.640058 1466137 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 01:58:10.680314 1466137 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 01:58:10.684527 1466137 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 01:58:10.684619 1466137 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 01:58:10.711033 1466137 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1119 01:58:10.711059 1466137 start.go:496] detecting cgroup driver to use...
	I1119 01:58:10.711125 1466137 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1119 01:58:10.711199 1466137 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 01:58:10.728807 1466137 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 01:58:10.742848 1466137 docker.go:218] disabling cri-docker service (if available) ...
	I1119 01:58:10.742919 1466137 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 01:58:10.759018 1466137 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 01:58:10.776842 1466137 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 01:58:10.893379 1466137 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 01:58:11.012170 1466137 docker.go:234] disabling docker service ...
	I1119 01:58:11.012241 1466137 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 01:58:11.033812 1466137 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 01:58:11.046728 1466137 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 01:58:11.160940 1466137 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 01:58:11.274852 1466137 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 01:58:11.286921 1466137 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 01:58:11.300834 1466137 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 01:58:11.300942 1466137 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 01:58:11.309662 1466137 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1119 01:58:11.309786 1466137 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 01:58:11.318817 1466137 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 01:58:11.327369 1466137 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 01:58:11.335727 1466137 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 01:58:11.343430 1466137 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 01:58:11.352009 1466137 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 01:58:11.364713 1466137 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 01:58:11.373448 1466137 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 01:58:11.381463 1466137 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 01:58:11.389037 1466137 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 01:58:11.503634 1466137 ssh_runner.go:195] Run: sudo systemctl restart crio
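The run of sed commands above is how minikube edits the CRI-O drop-in in place: pin the pause image, switch the cgroup manager to cgroupfs, keep conmon in the pod cgroup, then restart crio. A local Go stand-in for those rewrites (the sample input is illustrative; the real edits run over SSH and also splice in default_sysctls):
	// crioconf_sketch.go - rewrite pause_image, cgroup_manager and conmon_cgroup
	// in a CRI-O drop-in, the same effect as the sed one-liners in the log.
	package main
	
	import (
		"fmt"
		"regexp"
	)
	
	func main() {
		conf := `[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"
	
	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "system.slice"
	`
		pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
		cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
		conmon := regexp.MustCompile(`(?m)^\s*conmon_cgroup = .*$`)
	
		conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
		conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
		conf = conmon.ReplaceAllString(conf, `conmon_cgroup = "pod"`)
	
		fmt.Print(conf)
	}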
	I1119 01:58:11.674864 1466137 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 01:58:11.675018 1466137 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 01:58:11.678819 1466137 start.go:564] Will wait 60s for crictl version
	I1119 01:58:11.678941 1466137 ssh_runner.go:195] Run: which crictl
	I1119 01:58:11.682339 1466137 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 01:58:11.705234 1466137 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1119 01:58:11.705423 1466137 ssh_runner.go:195] Run: crio --version
	I1119 01:58:11.733082 1466137 ssh_runner.go:195] Run: crio --version
	I1119 01:58:11.764601 1466137 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1119 01:58:11.767424 1466137 cli_runner.go:164] Run: docker network inspect addons-238225 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 01:58:11.781609 1466137 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1119 01:58:11.785317 1466137 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 01:58:11.794954 1466137 kubeadm.go:884] updating cluster {Name:addons-238225 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-238225 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 01:58:11.795081 1466137 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 01:58:11.795137 1466137 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 01:58:11.829128 1466137 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 01:58:11.829148 1466137 crio.go:433] Images already preloaded, skipping extraction
	I1119 01:58:11.829203 1466137 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 01:58:11.852992 1466137 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 01:58:11.853014 1466137 cache_images.go:86] Images are preloaded, skipping loading
	I1119 01:58:11.853022 1466137 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1119 01:58:11.853109 1466137 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-238225 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-238225 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 01:58:11.853196 1466137 ssh_runner.go:195] Run: crio config
	I1119 01:58:11.922557 1466137 cni.go:84] Creating CNI manager for ""
	I1119 01:58:11.922580 1466137 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 01:58:11.922598 1466137 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 01:58:11.922641 1466137 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-238225 NodeName:addons-238225 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 01:58:11.922802 1466137 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-238225"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1119 01:58:11.922916 1466137 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 01:58:11.930418 1466137 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 01:58:11.930527 1466137 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 01:58:11.938001 1466137 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1119 01:58:11.950566 1466137 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 01:58:11.963170 1466137 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
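The 2210-byte kubeadm.yaml.new written above is rendered from templates keyed on the options dumped at kubeadm.go:190. A toy text/template sketch of that mechanism for one fragment of the ClusterConfiguration (field values mirror this log; this is not minikube's actual template):
	// kubeadmcfg_sketch.go - render a ClusterConfiguration fragment from parameters,
	// illustrating how a kubeadm.yaml like the one above can be generated.
	package main
	
	import (
		"log"
		"os"
		"text/template"
	)
	
	const clusterCfg = `apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	clusterName: mk
	controlPlaneEndpoint: {{.ControlPlaneEndpoint}}:8443
	kubernetesVersion: {{.KubernetesVersion}}
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "{{.PodSubnet}}"
	  serviceSubnet: {{.ServiceSubnet}}
	`
	
	type params struct {
		ControlPlaneEndpoint string
		KubernetesVersion    string
		PodSubnet            string
		ServiceSubnet        string
	}
	
	func main() {
		tmpl := template.Must(template.New("cluster").Parse(clusterCfg))
		err := tmpl.Execute(os.Stdout, params{
			ControlPlaneEndpoint: "control-plane.minikube.internal",
			KubernetesVersion:    "v1.34.1",
			PodSubnet:            "10.244.0.0/16",
			ServiceSubnet:        "10.96.0.0/12",
		})
		if err != nil {
			log.Fatal(err)
		}
	}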
	I1119 01:58:11.976223 1466137 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1119 01:58:11.979835 1466137 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 01:58:11.989395 1466137 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 01:58:12.109291 1466137 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 01:58:12.126019 1466137 certs.go:69] Setting up /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225 for IP: 192.168.49.2
	I1119 01:58:12.126042 1466137 certs.go:195] generating shared ca certs ...
	I1119 01:58:12.126059 1466137 certs.go:227] acquiring lock for ca certs: {Name:mk25124d30540ffea11da5f0aa38fb6f55186602 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 01:58:12.126245 1466137 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.key
	I1119 01:58:12.846969 1466137 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.crt ...
	I1119 01:58:12.847002 1466137 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.crt: {Name:mk0c4361aeeaf7c6e5e4fb8de5c4717adb9c2334 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 01:58:12.847894 1466137 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.key ...
	I1119 01:58:12.847951 1466137 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.key: {Name:mkce782e72709e74ea14a8a7ccdc217d1e1d221c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 01:58:12.848736 1466137 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/proxy-client-ca.key
	I1119 01:58:13.019443 1466137 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-1463525/.minikube/proxy-client-ca.crt ...
	I1119 01:58:13.019472 1466137 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/.minikube/proxy-client-ca.crt: {Name:mk2eb27b4a9cc79187840dd91a0f84ea78372129 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 01:58:13.020288 1466137 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-1463525/.minikube/proxy-client-ca.key ...
	I1119 01:58:13.020300 1466137 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/.minikube/proxy-client-ca.key: {Name:mk4f229732877afbb5a1f392429a97effead11d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 01:58:13.021047 1466137 certs.go:257] generating profile certs ...
	I1119 01:58:13.021113 1466137 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/client.key
	I1119 01:58:13.021125 1466137 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/client.crt with IP's: []
	I1119 01:58:13.291722 1466137 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/client.crt ...
	I1119 01:58:13.291762 1466137 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/client.crt: {Name:mk7c6f3478e869402733655745f3c649bc4cf27b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 01:58:13.291974 1466137 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/client.key ...
	I1119 01:58:13.291987 1466137 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/client.key: {Name:mk5393eeaae98609485c90bd844759b781e24061 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 01:58:13.292729 1466137 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/apiserver.key.d3545e80
	I1119 01:58:13.292753 1466137 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/apiserver.crt.d3545e80 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1119 01:58:13.861856 1466137 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/apiserver.crt.d3545e80 ...
	I1119 01:58:13.861887 1466137 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/apiserver.crt.d3545e80: {Name:mk0e7f3115a319e6424c82313a6ba7ca09e7de62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 01:58:13.862074 1466137 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/apiserver.key.d3545e80 ...
	I1119 01:58:13.862089 1466137 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/apiserver.key.d3545e80: {Name:mk802419c573368863efff5022d2830176aeec97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 01:58:13.862174 1466137 certs.go:382] copying /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/apiserver.crt.d3545e80 -> /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/apiserver.crt
	I1119 01:58:13.862258 1466137 certs.go:386] copying /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/apiserver.key.d3545e80 -> /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/apiserver.key
	I1119 01:58:13.862316 1466137 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/proxy-client.key
	I1119 01:58:13.862337 1466137 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/proxy-client.crt with IP's: []
	I1119 01:58:14.087267 1466137 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/proxy-client.crt ...
	I1119 01:58:14.087300 1466137 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/proxy-client.crt: {Name:mkc267d713810694574eca8f448ad878ddde9de0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 01:58:14.088139 1466137 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/proxy-client.key ...
	I1119 01:58:14.088160 1466137 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/proxy-client.key: {Name:mk30ba7bd08f0e5a57ca942eb2c0669db74541ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 01:58:14.088377 1466137 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca-key.pem (1679 bytes)
	I1119 01:58:14.088427 1466137 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem (1078 bytes)
	I1119 01:58:14.088456 1466137 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/cert.pem (1123 bytes)
	I1119 01:58:14.088497 1466137 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/key.pem (1675 bytes)
	I1119 01:58:14.089092 1466137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 01:58:14.108086 1466137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1119 01:58:14.127496 1466137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 01:58:14.145713 1466137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1119 01:58:14.163024 1466137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1119 01:58:14.180262 1466137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1119 01:58:14.197519 1466137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 01:58:14.214975 1466137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1119 01:58:14.231973 1466137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 01:58:14.249394 1466137 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 01:58:14.261888 1466137 ssh_runner.go:195] Run: openssl version
	I1119 01:58:14.267860 1466137 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 01:58:14.276028 1466137 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 01:58:14.279598 1466137 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 01:58 /usr/share/ca-certificates/minikubeCA.pem
	I1119 01:58:14.279655 1466137 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 01:58:14.321296 1466137 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 01:58:14.329640 1466137 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 01:58:14.333151 1466137 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1119 01:58:14.333222 1466137 kubeadm.go:401] StartCluster: {Name:addons-238225 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-238225 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 01:58:14.333306 1466137 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 01:58:14.333364 1466137 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 01:58:14.358947 1466137 cri.go:89] found id: ""
	I1119 01:58:14.359087 1466137 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 01:58:14.366703 1466137 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1119 01:58:14.374159 1466137 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1119 01:58:14.374260 1466137 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1119 01:58:14.381688 1466137 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1119 01:58:14.381709 1466137 kubeadm.go:158] found existing configuration files:
	
	I1119 01:58:14.381759 1466137 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1119 01:58:14.388970 1466137 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1119 01:58:14.389036 1466137 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1119 01:58:14.395948 1466137 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1119 01:58:14.403246 1466137 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1119 01:58:14.403313 1466137 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1119 01:58:14.410353 1466137 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1119 01:58:14.417712 1466137 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1119 01:58:14.417785 1466137 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1119 01:58:14.424651 1466137 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1119 01:58:14.431797 1466137 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1119 01:58:14.431910 1466137 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1119 01:58:14.439158 1466137 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1119 01:58:14.479646 1466137 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1119 01:58:14.479711 1466137 kubeadm.go:319] [preflight] Running pre-flight checks
	I1119 01:58:14.519670 1466137 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1119 01:58:14.519749 1466137 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1119 01:58:14.519791 1466137 kubeadm.go:319] OS: Linux
	I1119 01:58:14.519850 1466137 kubeadm.go:319] CGROUPS_CPU: enabled
	I1119 01:58:14.519905 1466137 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1119 01:58:14.519959 1466137 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1119 01:58:14.520013 1466137 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1119 01:58:14.520067 1466137 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1119 01:58:14.520124 1466137 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1119 01:58:14.520175 1466137 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1119 01:58:14.520232 1466137 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1119 01:58:14.520285 1466137 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1119 01:58:14.613186 1466137 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1119 01:58:14.613347 1466137 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1119 01:58:14.613474 1466137 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1119 01:58:14.620897 1466137 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1119 01:58:14.627578 1466137 out.go:252]   - Generating certificates and keys ...
	I1119 01:58:14.627677 1466137 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1119 01:58:14.627751 1466137 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1119 01:58:15.007429 1466137 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1119 01:58:15.420693 1466137 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1119 01:58:16.890857 1466137 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1119 01:58:17.407029 1466137 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1119 01:58:17.741418 1466137 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1119 01:58:17.741720 1466137 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-238225 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1119 01:58:18.035303 1466137 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1119 01:58:18.035697 1466137 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-238225 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1119 01:58:18.379554 1466137 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1119 01:58:18.690194 1466137 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1119 01:58:18.917694 1466137 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1119 01:58:18.918017 1466137 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1119 01:58:19.047107 1466137 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1119 01:58:19.467560 1466137 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1119 01:58:19.876620 1466137 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1119 01:58:20.588082 1466137 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1119 01:58:20.656983 1466137 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1119 01:58:20.657621 1466137 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1119 01:58:20.662211 1466137 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1119 01:58:20.665471 1466137 out.go:252]   - Booting up control plane ...
	I1119 01:58:20.665599 1466137 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1119 01:58:20.665711 1466137 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1119 01:58:20.666404 1466137 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1119 01:58:20.681166 1466137 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1119 01:58:20.681565 1466137 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1119 01:58:20.688638 1466137 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1119 01:58:20.689294 1466137 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1119 01:58:20.689658 1466137 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1119 01:58:20.818044 1466137 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1119 01:58:20.818177 1466137 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1119 01:58:21.822146 1466137 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.004365008s
	I1119 01:58:21.825064 1466137 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1119 01:58:21.825340 1466137 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1119 01:58:21.825611 1466137 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1119 01:58:21.825872 1466137 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1119 01:58:26.937388 1466137 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 5.110950457s
	I1119 01:58:27.827281 1466137 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.001622732s
	I1119 01:58:28.546119 1466137 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 6.719860441s
	I1119 01:58:28.584147 1466137 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1119 01:58:28.595199 1466137 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1119 01:58:28.608938 1466137 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1119 01:58:28.609151 1466137 kubeadm.go:319] [mark-control-plane] Marking the node addons-238225 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1119 01:58:28.621120 1466137 kubeadm.go:319] [bootstrap-token] Using token: qew20g.0239fhbjyet3v0oc
	I1119 01:58:28.624150 1466137 out.go:252]   - Configuring RBAC rules ...
	I1119 01:58:28.624278 1466137 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1119 01:58:28.628392 1466137 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1119 01:58:28.640715 1466137 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1119 01:58:28.644549 1466137 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1119 01:58:28.648548 1466137 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1119 01:58:28.652371 1466137 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1119 01:58:28.954233 1466137 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1119 01:58:29.388273 1466137 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1119 01:58:29.952893 1466137 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1119 01:58:29.954218 1466137 kubeadm.go:319] 
	I1119 01:58:29.954314 1466137 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1119 01:58:29.954323 1466137 kubeadm.go:319] 
	I1119 01:58:29.954405 1466137 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1119 01:58:29.954412 1466137 kubeadm.go:319] 
	I1119 01:58:29.954457 1466137 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1119 01:58:29.954535 1466137 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1119 01:58:29.954611 1466137 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1119 01:58:29.954625 1466137 kubeadm.go:319] 
	I1119 01:58:29.954688 1466137 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1119 01:58:29.954694 1466137 kubeadm.go:319] 
	I1119 01:58:29.954744 1466137 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1119 01:58:29.954749 1466137 kubeadm.go:319] 
	I1119 01:58:29.954803 1466137 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1119 01:58:29.954882 1466137 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1119 01:58:29.954958 1466137 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1119 01:58:29.954963 1466137 kubeadm.go:319] 
	I1119 01:58:29.955051 1466137 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1119 01:58:29.955131 1466137 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1119 01:58:29.955136 1466137 kubeadm.go:319] 
	I1119 01:58:29.955224 1466137 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token qew20g.0239fhbjyet3v0oc \
	I1119 01:58:29.955332 1466137 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:abb22cc8ae8e186956cff8cc7dabd6326c697e35c4ead85bcd3b5702cdc3f73a \
	I1119 01:58:29.955353 1466137 kubeadm.go:319] 	--control-plane 
	I1119 01:58:29.955358 1466137 kubeadm.go:319] 
	I1119 01:58:29.955447 1466137 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1119 01:58:29.955451 1466137 kubeadm.go:319] 
	I1119 01:58:29.955542 1466137 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token qew20g.0239fhbjyet3v0oc \
	I1119 01:58:29.955648 1466137 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:abb22cc8ae8e186956cff8cc7dabd6326c697e35c4ead85bcd3b5702cdc3f73a 
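The sha256:... value in the join commands above is not a hash of the whole ca.crt file; it is the SHA-256 of the CA certificate's DER-encoded Subject Public Key Info, which is how kubeadm defines its public-key pin. A short Go sketch that reproduces it from ca.crt (path as used on the node):
	// cahash_sketch.go - compute the kubeadm --discovery-token-ca-cert-hash value
	// from the cluster CA certificate.
	package main
	
	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
	)
	
	func main() {
		pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			log.Fatal("no PEM block found in ca.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		// Hash the DER-encoded SubjectPublicKeyInfo, not the certificate bytes.
		sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
		fmt.Printf("--discovery-token-ca-cert-hash sha256:%x\n", sum)
	}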
	I1119 01:58:29.958186 1466137 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1119 01:58:29.958419 1466137 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1119 01:58:29.958528 1466137 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
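Editor's note on the join commands printed above: the --discovery-token-ca-cert-hash value is a SHA-256 pin of the cluster CA's public key (its DER-encoded SubjectPublicKeyInfo), which a joining node uses to verify it is talking to the right control plane before trusting it. A minimal Go sketch of recomputing that hash from the CA certificate; the path is the standard kubeadm location, not taken from this log, and this is illustrative rather than minikube's own code:

    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	// Read the cluster CA certificate (standard kubeadm location).
    	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(pemBytes)
    	if block == nil {
    		panic("no PEM block found in ca.crt")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// The hash in "--discovery-token-ca-cert-hash sha256:..." is computed
    	// over the DER-encoded SubjectPublicKeyInfo of the CA public key.
    	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
    }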
	I1119 01:58:29.958543 1466137 cni.go:84] Creating CNI manager for ""
	I1119 01:58:29.958551 1466137 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 01:58:29.963511 1466137 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1119 01:58:29.966327 1466137 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1119 01:58:29.970402 1466137 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1119 01:58:29.970423 1466137 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1119 01:58:29.982309 1466137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1119 01:58:30.290424 1466137 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1119 01:58:30.290583 1466137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 01:58:30.290661 1466137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-238225 minikube.k8s.io/updated_at=2025_11_19T01_58_30_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ebd49efe8b7d44f89fc96e21beffcd64dfee8277 minikube.k8s.io/name=addons-238225 minikube.k8s.io/primary=true
	I1119 01:58:30.435848 1466137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 01:58:30.435909 1466137 ops.go:34] apiserver oom_adj: -16
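Editor's note: the "apiserver oom_adj: -16" line above is the result of reading /proc/<pid>/oom_adj for the kube-apiserver process, confirming it is deprioritized for the kernel OOM killer. A minimal standard-library Go sketch of the same check; the pgrep and /proc paths mirror the logged command, everything else is illustrative:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Find the kube-apiserver PID, as the logged bash command does with pgrep.
    	out, err := exec.Command("pgrep", "kube-apiserver").Output()
    	if err != nil {
    		fmt.Fprintln(os.Stderr, "pgrep failed:", err)
    		os.Exit(1)
    	}
    	pid := strings.Fields(string(out))[0]

    	// Read the process's oom_adj; a negative value such as -16 means the
    	// kernel is unlikely to pick this process when memory runs out.
    	data, err := os.ReadFile("/proc/" + pid + "/oom_adj")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, "read failed:", err)
    		os.Exit(1)
    	}
    	fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(data)))
    }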
	I1119 01:58:30.935974 1466137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 01:58:31.436347 1466137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 01:58:31.936147 1466137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 01:58:32.436830 1466137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 01:58:32.936309 1466137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 01:58:33.436774 1466137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 01:58:33.936570 1466137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 01:58:34.078063 1466137 kubeadm.go:1114] duration metric: took 3.787523418s to wait for elevateKubeSystemPrivileges
	I1119 01:58:34.078095 1466137 kubeadm.go:403] duration metric: took 19.74488099s to StartCluster
	I1119 01:58:34.078113 1466137 settings.go:142] acquiring lock: {Name:mk60840cd190596747bdc8f4ec6ab9c30048bf11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 01:58:34.078254 1466137 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21924-1463525/kubeconfig
	I1119 01:58:34.078661 1466137 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/kubeconfig: {Name:mk569dbb5fa76b01404eb61ebb04e9884665ea78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 01:58:34.078871 1466137 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 01:58:34.079035 1466137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1119 01:58:34.079339 1466137 config.go:182] Loaded profile config "addons-238225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 01:58:34.079385 1466137 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1119 01:58:34.079467 1466137 addons.go:70] Setting yakd=true in profile "addons-238225"
	I1119 01:58:34.079495 1466137 addons.go:239] Setting addon yakd=true in "addons-238225"
	I1119 01:58:34.079517 1466137 host.go:66] Checking if "addons-238225" exists ...
	I1119 01:58:34.080055 1466137 cli_runner.go:164] Run: docker container inspect addons-238225 --format={{.State.Status}}
	I1119 01:58:34.080558 1466137 addons.go:70] Setting metrics-server=true in profile "addons-238225"
	I1119 01:58:34.080583 1466137 addons.go:239] Setting addon metrics-server=true in "addons-238225"
	I1119 01:58:34.080617 1466137 host.go:66] Checking if "addons-238225" exists ...
	I1119 01:58:34.081049 1466137 cli_runner.go:164] Run: docker container inspect addons-238225 --format={{.State.Status}}
	I1119 01:58:34.083030 1466137 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-238225"
	I1119 01:58:34.083113 1466137 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-238225"
	I1119 01:58:34.083211 1466137 host.go:66] Checking if "addons-238225" exists ...
	I1119 01:58:34.083576 1466137 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-238225"
	I1119 01:58:34.083661 1466137 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-238225"
	I1119 01:58:34.083701 1466137 host.go:66] Checking if "addons-238225" exists ...
	I1119 01:58:34.085319 1466137 cli_runner.go:164] Run: docker container inspect addons-238225 --format={{.State.Status}}
	I1119 01:58:34.086913 1466137 addons.go:70] Setting registry=true in profile "addons-238225"
	I1119 01:58:34.088913 1466137 addons.go:239] Setting addon registry=true in "addons-238225"
	I1119 01:58:34.088950 1466137 host.go:66] Checking if "addons-238225" exists ...
	I1119 01:58:34.089420 1466137 cli_runner.go:164] Run: docker container inspect addons-238225 --format={{.State.Status}}
	I1119 01:58:34.083817 1466137 addons.go:70] Setting cloud-spanner=true in profile "addons-238225"
	I1119 01:58:34.093329 1466137 addons.go:239] Setting addon cloud-spanner=true in "addons-238225"
	I1119 01:58:34.093396 1466137 host.go:66] Checking if "addons-238225" exists ...
	I1119 01:58:34.094112 1466137 cli_runner.go:164] Run: docker container inspect addons-238225 --format={{.State.Status}}
	I1119 01:58:34.083826 1466137 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-238225"
	I1119 01:58:34.102090 1466137 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-238225"
	I1119 01:58:34.102136 1466137 host.go:66] Checking if "addons-238225" exists ...
	I1119 01:58:34.102615 1466137 cli_runner.go:164] Run: docker container inspect addons-238225 --format={{.State.Status}}
	I1119 01:58:34.088471 1466137 addons.go:70] Setting registry-creds=true in profile "addons-238225"
	I1119 01:58:34.103260 1466137 addons.go:239] Setting addon registry-creds=true in "addons-238225"
	I1119 01:58:34.083834 1466137 addons.go:70] Setting gcp-auth=true in profile "addons-238225"
	I1119 01:58:34.103304 1466137 mustload.go:66] Loading cluster: addons-238225
	I1119 01:58:34.083837 1466137 addons.go:70] Setting ingress=true in profile "addons-238225"
	I1119 01:58:34.103371 1466137 addons.go:239] Setting addon ingress=true in "addons-238225"
	I1119 01:58:34.103406 1466137 host.go:66] Checking if "addons-238225" exists ...
	I1119 01:58:34.103852 1466137 cli_runner.go:164] Run: docker container inspect addons-238225 --format={{.State.Status}}
	I1119 01:58:34.083831 1466137 addons.go:70] Setting default-storageclass=true in profile "addons-238225"
	I1119 01:58:34.127006 1466137 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-238225"
	I1119 01:58:34.128587 1466137 cli_runner.go:164] Run: docker container inspect addons-238225 --format={{.State.Status}}
	I1119 01:58:34.083844 1466137 addons.go:70] Setting ingress-dns=true in profile "addons-238225"
	I1119 01:58:34.137695 1466137 addons.go:239] Setting addon ingress-dns=true in "addons-238225"
	I1119 01:58:34.137746 1466137 host.go:66] Checking if "addons-238225" exists ...
	I1119 01:58:34.138209 1466137 cli_runner.go:164] Run: docker container inspect addons-238225 --format={{.State.Status}}
	I1119 01:58:34.083848 1466137 addons.go:70] Setting inspektor-gadget=true in profile "addons-238225"
	I1119 01:58:34.148258 1466137 addons.go:239] Setting addon inspektor-gadget=true in "addons-238225"
	I1119 01:58:34.148299 1466137 host.go:66] Checking if "addons-238225" exists ...
	I1119 01:58:34.148782 1466137 cli_runner.go:164] Run: docker container inspect addons-238225 --format={{.State.Status}}
	I1119 01:58:34.088485 1466137 addons.go:70] Setting storage-provisioner=true in profile "addons-238225"
	I1119 01:58:34.148986 1466137 addons.go:239] Setting addon storage-provisioner=true in "addons-238225"
	I1119 01:58:34.149011 1466137 host.go:66] Checking if "addons-238225" exists ...
	I1119 01:58:34.149892 1466137 cli_runner.go:164] Run: docker container inspect addons-238225 --format={{.State.Status}}
	I1119 01:58:34.088494 1466137 addons.go:70] Setting volcano=true in profile "addons-238225"
	I1119 01:58:34.161057 1466137 addons.go:239] Setting addon volcano=true in "addons-238225"
	I1119 01:58:34.161108 1466137 host.go:66] Checking if "addons-238225" exists ...
	I1119 01:58:34.161620 1466137 cli_runner.go:164] Run: docker container inspect addons-238225 --format={{.State.Status}}
	I1119 01:58:34.088490 1466137 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-238225"
	I1119 01:58:34.171292 1466137 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-238225"
	I1119 01:58:34.171649 1466137 cli_runner.go:164] Run: docker container inspect addons-238225 --format={{.State.Status}}
	I1119 01:58:34.088498 1466137 addons.go:70] Setting volumesnapshots=true in profile "addons-238225"
	I1119 01:58:34.175851 1466137 addons.go:239] Setting addon volumesnapshots=true in "addons-238225"
	I1119 01:58:34.175905 1466137 host.go:66] Checking if "addons-238225" exists ...
	I1119 01:58:34.176365 1466137 cli_runner.go:164] Run: docker container inspect addons-238225 --format={{.State.Status}}
	I1119 01:58:34.088885 1466137 cli_runner.go:164] Run: docker container inspect addons-238225 --format={{.State.Status}}
	I1119 01:58:34.088895 1466137 out.go:179] * Verifying Kubernetes components...
	I1119 01:58:34.203144 1466137 host.go:66] Checking if "addons-238225" exists ...
	I1119 01:58:34.203636 1466137 cli_runner.go:164] Run: docker container inspect addons-238225 --format={{.State.Status}}
	I1119 01:58:34.223591 1466137 config.go:182] Loaded profile config "addons-238225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 01:58:34.223891 1466137 cli_runner.go:164] Run: docker container inspect addons-238225 --format={{.State.Status}}
	I1119 01:58:34.271323 1466137 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 01:58:34.329536 1466137 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1119 01:58:34.346815 1466137 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1119 01:58:34.350717 1466137 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1119 01:58:34.350772 1466137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1119 01:58:34.350862 1466137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-238225
	I1119 01:58:34.356498 1466137 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1119 01:58:34.360344 1466137 out.go:179]   - Using image docker.io/registry:3.0.0
	I1119 01:58:34.367739 1466137 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1119 01:58:34.380775 1466137 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1119 01:58:34.386778 1466137 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1119 01:58:34.389817 1466137 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1119 01:58:34.392929 1466137 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1119 01:58:34.395920 1466137 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1119 01:58:34.396256 1466137 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1119 01:58:34.396277 1466137 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1119 01:58:34.396346 1466137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-238225
	W1119 01:58:34.403532 1466137 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1119 01:58:34.406921 1466137 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1119 01:58:34.407120 1466137 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1119 01:58:34.407133 1466137 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1119 01:58:34.407205 1466137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-238225
	I1119 01:58:34.421641 1466137 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1119 01:58:34.421721 1466137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1119 01:58:34.421827 1466137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-238225
	I1119 01:58:34.429862 1466137 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 01:58:34.433258 1466137 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-238225"
	I1119 01:58:34.433298 1466137 host.go:66] Checking if "addons-238225" exists ...
	I1119 01:58:34.438025 1466137 cli_runner.go:164] Run: docker container inspect addons-238225 --format={{.State.Status}}
	I1119 01:58:34.451420 1466137 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1119 01:58:34.451562 1466137 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1119 01:58:34.456323 1466137 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1119 01:58:34.458290 1466137 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1119 01:58:34.460418 1466137 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1119 01:58:34.460441 1466137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1119 01:58:34.460510 1466137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-238225
	I1119 01:58:34.466734 1466137 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1119 01:58:34.466761 1466137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1119 01:58:34.481968 1466137 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.43
	I1119 01:58:34.486911 1466137 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1119 01:58:34.486935 1466137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1119 01:58:34.487009 1466137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-238225
	I1119 01:58:34.491885 1466137 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 01:58:34.491905 1466137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 01:58:34.491974 1466137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-238225
	I1119 01:58:34.509711 1466137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-238225
	I1119 01:58:34.521944 1466137 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1119 01:58:34.522361 1466137 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1119 01:58:34.553580 1466137 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1119 01:58:34.557233 1466137 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1119 01:58:34.560143 1466137 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1119 01:58:34.560174 1466137 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1119 01:58:34.560273 1466137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-238225
	I1119 01:58:34.565718 1466137 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1119 01:58:34.574700 1466137 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1119 01:58:34.574788 1466137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1119 01:58:34.574947 1466137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-238225
	I1119 01:58:34.595776 1466137 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1119 01:58:34.595800 1466137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1119 01:58:34.595883 1466137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-238225
	I1119 01:58:34.610041 1466137 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1119 01:58:34.610070 1466137 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1119 01:58:34.610156 1466137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-238225
	I1119 01:58:34.610556 1466137 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34614 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/addons-238225/id_rsa Username:docker}
	I1119 01:58:34.619430 1466137 addons.go:239] Setting addon default-storageclass=true in "addons-238225"
	I1119 01:58:34.619477 1466137 host.go:66] Checking if "addons-238225" exists ...
	I1119 01:58:34.619913 1466137 cli_runner.go:164] Run: docker container inspect addons-238225 --format={{.State.Status}}
	I1119 01:58:34.621459 1466137 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34614 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/addons-238225/id_rsa Username:docker}
	I1119 01:58:34.641846 1466137 host.go:66] Checking if "addons-238225" exists ...
	I1119 01:58:34.660229 1466137 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1119 01:58:34.660283 1466137 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34614 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/addons-238225/id_rsa Username:docker}
	I1119 01:58:34.661346 1466137 out.go:179]   - Using image docker.io/busybox:stable
	I1119 01:58:34.693638 1466137 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1119 01:58:34.673856 1466137 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34614 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/addons-238225/id_rsa Username:docker}
	I1119 01:58:34.698830 1466137 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34614 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/addons-238225/id_rsa Username:docker}
	I1119 01:58:34.674449 1466137 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1119 01:58:34.699649 1466137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1119 01:58:34.699838 1466137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-238225
	I1119 01:58:34.701619 1466137 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34614 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/addons-238225/id_rsa Username:docker}
	I1119 01:58:34.702063 1466137 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1119 01:58:34.702074 1466137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1119 01:58:34.702122 1466137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-238225
	I1119 01:58:34.726930 1466137 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34614 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/addons-238225/id_rsa Username:docker}
	I1119 01:58:34.735551 1466137 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34614 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/addons-238225/id_rsa Username:docker}
	I1119 01:58:34.745439 1466137 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34614 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/addons-238225/id_rsa Username:docker}
	I1119 01:58:34.773902 1466137 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34614 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/addons-238225/id_rsa Username:docker}
	I1119 01:58:34.778758 1466137 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34614 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/addons-238225/id_rsa Username:docker}
	I1119 01:58:34.797978 1466137 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 01:58:34.797998 1466137 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 01:58:34.798058 1466137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-238225
	I1119 01:58:34.799941 1466137 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34614 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/addons-238225/id_rsa Username:docker}
	I1119 01:58:34.837794 1466137 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34614 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/addons-238225/id_rsa Username:docker}
	I1119 01:58:34.844430 1466137 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34614 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/addons-238225/id_rsa Username:docker}
	I1119 01:58:34.847039 1466137 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34614 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/addons-238225/id_rsa Username:docker}
	I1119 01:58:34.902520 1466137 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 01:58:34.902721 1466137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1119 01:58:35.204139 1466137 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1119 01:58:35.204212 1466137 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1119 01:58:35.282417 1466137 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1119 01:58:35.282490 1466137 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1119 01:58:35.287825 1466137 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1119 01:58:35.287847 1466137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1119 01:58:35.307204 1466137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1119 01:58:35.331503 1466137 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1119 01:58:35.331574 1466137 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1119 01:58:35.338465 1466137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1119 01:58:35.340712 1466137 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1119 01:58:35.340776 1466137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1119 01:58:35.394454 1466137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1119 01:58:35.402303 1466137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1119 01:58:35.409662 1466137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1119 01:58:35.424396 1466137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 01:58:35.432458 1466137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1119 01:58:35.434260 1466137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1119 01:58:35.434934 1466137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1119 01:58:35.469164 1466137 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1119 01:58:35.469237 1466137 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1119 01:58:35.507232 1466137 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1119 01:58:35.507309 1466137 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1119 01:58:35.515372 1466137 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1119 01:58:35.515452 1466137 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1119 01:58:35.519313 1466137 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1119 01:58:35.519388 1466137 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1119 01:58:35.520176 1466137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1119 01:58:35.567831 1466137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 01:58:35.601141 1466137 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1119 01:58:35.601215 1466137 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1119 01:58:35.663223 1466137 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1119 01:58:35.663292 1466137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1119 01:58:35.663537 1466137 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1119 01:58:35.663570 1466137 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1119 01:58:35.708764 1466137 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1119 01:58:35.708845 1466137 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1119 01:58:35.859058 1466137 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1119 01:58:35.859132 1466137 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1119 01:58:35.865400 1466137 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1119 01:58:35.865472 1466137 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1119 01:58:35.882002 1466137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1119 01:58:35.952711 1466137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1119 01:58:36.001653 1466137 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1119 01:58:36.001728 1466137 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1119 01:58:36.010332 1466137 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1119 01:58:36.010417 1466137 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1119 01:58:36.193099 1466137 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1119 01:58:36.193172 1466137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1119 01:58:36.238272 1466137 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1119 01:58:36.238350 1466137 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1119 01:58:36.449743 1466137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1119 01:58:36.514613 1466137 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1119 01:58:36.514688 1466137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1119 01:58:36.562006 1466137 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.659251096s)
	I1119 01:58:36.562088 1466137 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
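Editor's note: the pipeline that just completed rewrites the CoreDNS ConfigMap so that host.minikube.internal resolves to the host gateway (192.168.49.1 here). A simplified Go sketch of the Corefile edit the sed expression performs; this is not minikube's code, and it omits the additional "log" directive, inserting only the hosts block ahead of the forward plugin:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // injectHostRecord inserts a hosts block mapping host.minikube.internal to
    // hostIP immediately before the "forward . /etc/resolv.conf" directive.
    func injectHostRecord(corefile, hostIP string) string {
    	hostsBlock := fmt.Sprintf(
    		"        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n",
    		hostIP)
    	var out strings.Builder
    	for _, line := range strings.SplitAfter(corefile, "\n") {
    		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
    			out.WriteString(hostsBlock)
    		}
    		out.WriteString(line)
    	}
    	return out.String()
    }

    func main() {
    	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n}\n"
    	fmt.Print(injectHostRecord(corefile, "192.168.49.1"))
    }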
	I1119 01:58:36.562825 1466137 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.660279703s)
	I1119 01:58:36.564115 1466137 node_ready.go:35] waiting up to 6m0s for node "addons-238225" to be "Ready" ...
	I1119 01:58:36.858453 1466137 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1119 01:58:36.858525 1466137 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1119 01:58:37.075321 1466137 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-238225" context rescaled to 1 replicas
	I1119 01:58:37.158010 1466137 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1119 01:58:37.158079 1466137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1119 01:58:37.301276 1466137 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1119 01:58:37.301352 1466137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1119 01:58:37.416798 1466137 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1119 01:58:37.416876 1466137 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1119 01:58:37.667758 1466137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1119 01:58:38.607366 1466137 node_ready.go:57] node "addons-238225" has "Ready":"False" status (will retry)
	I1119 01:58:39.223018 1466137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (3.884468845s)
	I1119 01:58:39.223157 1466137 addons.go:480] Verifying addon registry=true in "addons-238225"
	I1119 01:58:39.223068 1466137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.828540854s)
	I1119 01:58:39.223272 1466137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (3.916046933s)
	I1119 01:58:39.228275 1466137 out.go:179] * Verifying registry addon...
	I1119 01:58:39.232025 1466137 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1119 01:58:39.262709 1466137 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1119 01:58:39.262785 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:39.746504 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:40.220814 1466137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.818436304s)
	I1119 01:58:40.220897 1466137 addons.go:480] Verifying addon ingress=true in "addons-238225"
	I1119 01:58:40.221108 1466137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.81138061s)
	I1119 01:58:40.221253 1466137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.796795426s)
	I1119 01:58:40.221275 1466137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.788750544s)
	I1119 01:58:40.221291 1466137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.786972985s)
	I1119 01:58:40.221322 1466137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (4.786337245s)
	I1119 01:58:40.221371 1466137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.701151224s)
	I1119 01:58:40.221414 1466137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.653525773s)
	I1119 01:58:40.221442 1466137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.339379477s)
	I1119 01:58:40.221489 1466137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.268711484s)
	I1119 01:58:40.222239 1466137 addons.go:480] Verifying addon metrics-server=true in "addons-238225"
	I1119 01:58:40.221594 1466137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.771780024s)
	W1119 01:58:40.222266 1466137 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1119 01:58:40.222298 1466137 retry.go:31] will retry after 282.047195ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
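Editor's note: the failure above is an ordering race, not a broken manifest. The VolumeSnapshotClass object and the CRD that defines it are applied in the same batch, so the first apply can run before the new CRD is established and the kind mapping exists; minikube therefore schedules a retry (and at 01:58:40.504 below reapplies with --force), which succeeds once the CRDs are registered. A minimal sketch of that retry-with-backoff pattern, using a hypothetical applyWithRetry helper around kubectl rather than minikube's retry.go:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // applyWithRetry is a hypothetical helper: it reapplies a manifest until the
    // API server can map every kind in it (for example after freshly created
    // CRDs become established), backing off between attempts.
    func applyWithRetry(kubeconfig, manifest string, attempts int, delay time.Duration) error {
    	var lastErr error
    	for i := 0; i < attempts; i++ {
    		out, err := exec.Command("kubectl", "--kubeconfig", kubeconfig,
    			"apply", "-f", manifest).CombinedOutput()
    		if err == nil {
    			return nil
    		}
    		lastErr = fmt.Errorf("attempt %d: %v\n%s", i+1, err, out)
    		time.Sleep(delay)
    		delay *= 2 // back off before the next attempt
    	}
    	return lastErr
    }

    func main() {
    	err := applyWithRetry("/var/lib/minikube/kubeconfig",
    		"/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml",
    		5, 300*time.Millisecond)
    	if err != nil {
    		fmt.Println(err)
    	}
    }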
	I1119 01:58:40.224724 1466137 out.go:179] * Verifying ingress addon...
	I1119 01:58:40.226624 1466137 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-238225 service yakd-dashboard -n yakd-dashboard
	
	I1119 01:58:40.230425 1466137 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1119 01:58:40.243725 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:40.243987 1466137 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1119 01:58:40.243996 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 01:58:40.257296 1466137 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
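Editor's note: the default-storageclass warning above is an optimistic-concurrency conflict. The addon read the local-path StorageClass, another client updated it in the meantime, and the write was rejected because the submitted resourceVersion was stale; the usual remedy is to re-read the object and retry the update. A sketch of that pattern using client-go's RetryOnConflict helper; the k8s.io/client-go and k8s.io/apimachinery dependencies, kubeconfig path, and annotation handling are assumptions for illustration, not minikube's implementation:

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    	"k8s.io/client-go/util/retry"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// Re-fetch and update until the write lands on the latest resourceVersion.
    	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
    		sc, err := cs.StorageV1().StorageClasses().Get(context.TODO(), "local-path", metav1.GetOptions{})
    		if err != nil {
    			return err
    		}
    		if sc.Annotations == nil {
    			sc.Annotations = map[string]string{}
    		}
    		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
    		_, err = cs.StorageV1().StorageClasses().Update(context.TODO(), sc, metav1.UpdateOptions{})
    		return err
    	})
    	if err != nil {
    		fmt.Println("could not mark local-path as non-default:", err)
    	}
    }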
	I1119 01:58:40.504629 1466137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1119 01:58:40.538351 1466137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.87047648s)
	I1119 01:58:40.538383 1466137 addons.go:480] Verifying addon csi-hostpath-driver=true in "addons-238225"
	I1119 01:58:40.541410 1466137 out.go:179] * Verifying csi-hostpath-driver addon...
	I1119 01:58:40.544840 1466137 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1119 01:58:40.558083 1466137 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1119 01:58:40.558108 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:40.737470 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:40.738014 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:41.049195 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1119 01:58:41.067868 1466137 node_ready.go:57] node "addons-238225" has "Ready":"False" status (will retry)
	I1119 01:58:41.234369 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:41.235667 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:41.548649 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:41.736651 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:41.737205 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:42.051737 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:42.239961 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:42.242447 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:42.257756 1466137 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1119 01:58:42.257922 1466137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-238225
	I1119 01:58:42.282737 1466137 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34614 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/addons-238225/id_rsa Username:docker}
	I1119 01:58:42.399500 1466137 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1119 01:58:42.413385 1466137 addons.go:239] Setting addon gcp-auth=true in "addons-238225"
	I1119 01:58:42.413435 1466137 host.go:66] Checking if "addons-238225" exists ...
	I1119 01:58:42.413911 1466137 cli_runner.go:164] Run: docker container inspect addons-238225 --format={{.State.Status}}
	I1119 01:58:42.431189 1466137 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1119 01:58:42.431245 1466137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-238225
	I1119 01:58:42.448514 1466137 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34614 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/addons-238225/id_rsa Username:docker}
	I1119 01:58:42.547894 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:42.733289 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:42.735080 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:43.048323 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:43.195257 1466137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.690534368s)
	I1119 01:58:43.198395 1466137 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1119 01:58:43.201248 1466137 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1119 01:58:43.203961 1466137 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1119 01:58:43.203979 1466137 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1119 01:58:43.216753 1466137 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1119 01:58:43.216775 1466137 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1119 01:58:43.229623 1466137 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1119 01:58:43.229643 1466137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1119 01:58:43.234277 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:43.236058 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:43.248794 1466137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1119 01:58:43.548775 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1119 01:58:43.568113 1466137 node_ready.go:57] node "addons-238225" has "Ready":"False" status (will retry)
	I1119 01:58:43.719633 1466137 addons.go:480] Verifying addon gcp-auth=true in "addons-238225"
	I1119 01:58:43.722936 1466137 out.go:179] * Verifying gcp-auth addon...
	I1119 01:58:43.727345 1466137 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1119 01:58:43.731830 1466137 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1119 01:58:43.731896 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:43.734374 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:43.734933 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:44.048339 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:44.230892 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:44.232823 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:44.234956 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:44.548159 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:44.731057 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:44.733259 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:44.735280 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:45.049487 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:45.231088 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:45.234677 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:45.236687 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:45.548161 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:45.730900 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:45.733251 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:45.734960 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:46.048405 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1119 01:58:46.067060 1466137 node_ready.go:57] node "addons-238225" has "Ready":"False" status (will retry)
	I1119 01:58:46.231411 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:46.233454 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:46.234945 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:46.548133 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:46.731456 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:46.734984 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:46.735088 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:47.048228 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:47.231300 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:47.233792 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:47.235187 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:47.548446 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:47.730207 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:47.733810 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:47.734441 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:48.048890 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1119 01:58:48.068535 1466137 node_ready.go:57] node "addons-238225" has "Ready":"False" status (will retry)
	I1119 01:58:48.230526 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:48.232738 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:48.235031 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:48.548650 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:48.730271 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:48.733816 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:48.735160 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:49.048061 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:49.230633 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:49.232796 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:49.234902 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:49.548325 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:49.730617 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:49.732862 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:49.734390 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:50.048219 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:50.231312 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:50.234142 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:50.235068 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:50.547976 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1119 01:58:50.567641 1466137 node_ready.go:57] node "addons-238225" has "Ready":"False" status (will retry)
	I1119 01:58:50.730568 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:50.733130 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:50.734707 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:51.047783 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:51.230219 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:51.234233 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:51.234832 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:51.548843 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:51.730356 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:51.733790 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:51.734992 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:52.047881 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:52.230642 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:52.233095 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:52.234785 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:52.547736 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:52.730295 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:52.734175 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:52.734570 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:53.048573 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1119 01:58:53.067127 1466137 node_ready.go:57] node "addons-238225" has "Ready":"False" status (will retry)
	I1119 01:58:53.230994 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:53.233273 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:53.234149 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:53.548292 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:53.730446 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:53.734661 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:53.735142 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:54.048597 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:54.230577 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:54.234186 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:54.235160 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:54.548279 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:54.730438 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:54.734730 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:54.735719 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:55.047930 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1119 01:58:55.067728 1466137 node_ready.go:57] node "addons-238225" has "Ready":"False" status (will retry)
	I1119 01:58:55.230962 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:55.232707 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:55.234236 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:55.548618 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:55.730294 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:55.734060 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:55.734923 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:56.047724 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:56.230372 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:56.234786 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:56.235107 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:56.548107 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:56.730801 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:56.733080 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:56.735317 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:57.048721 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:57.230241 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:57.233843 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:57.234949 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:57.548593 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1119 01:58:57.567406 1466137 node_ready.go:57] node "addons-238225" has "Ready":"False" status (will retry)
	I1119 01:58:57.730287 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:57.733897 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:57.735202 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:58.048111 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:58.244038 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:58.244540 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:58.244820 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:58.547881 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:58.730861 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:58.733842 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:58.734590 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:59.048455 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:59.230046 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:59.233389 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:58:59.235414 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:59.548532 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:58:59.730294 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:58:59.734717 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:58:59.734896 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:00.110905 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1119 01:59:00.111198 1466137 node_ready.go:57] node "addons-238225" has "Ready":"False" status (will retry)
	I1119 01:59:00.231462 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:00.272599 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:00.273386 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:00.548718 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:00.730789 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:00.732992 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:00.734710 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:01.047703 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:01.231602 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:01.234714 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:01.237439 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:01.550469 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:01.730685 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:01.733188 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:01.735355 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:02.048498 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:02.230396 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:02.234896 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:02.235270 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:02.548351 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1119 01:59:02.567109 1466137 node_ready.go:57] node "addons-238225" has "Ready":"False" status (will retry)
	I1119 01:59:02.730963 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:02.734141 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:02.734450 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:03.049191 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:03.231023 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:03.233359 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:03.234876 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:03.548331 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:03.732016 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:03.733547 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:03.734364 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:04.048473 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:04.230947 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:04.233255 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:04.234929 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:04.548050 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1119 01:59:04.568001 1466137 node_ready.go:57] node "addons-238225" has "Ready":"False" status (will retry)
	I1119 01:59:04.731247 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:04.734450 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:04.734511 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:05.047848 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:05.230532 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:05.233113 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:05.234802 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:05.547896 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:05.730544 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:05.733439 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:05.738866 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:06.047914 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:06.230669 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:06.233145 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:06.235097 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:06.547921 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:06.730683 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:06.733131 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:06.734943 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:07.048071 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1119 01:59:07.066891 1466137 node_ready.go:57] node "addons-238225" has "Ready":"False" status (will retry)
	I1119 01:59:07.230495 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:07.232732 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:07.234261 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:07.548498 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:07.730416 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:07.732998 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:07.734852 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:08.048434 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:08.231034 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:08.233251 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:08.235248 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:08.548576 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:08.730972 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:08.732747 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:08.734510 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:09.048891 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1119 01:59:09.067908 1466137 node_ready.go:57] node "addons-238225" has "Ready":"False" status (will retry)
	I1119 01:59:09.230565 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:09.233069 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:09.234812 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:09.548113 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:09.730474 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:09.733782 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:09.734647 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:10.050566 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:10.230365 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:10.233478 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:10.235064 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:10.548330 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:10.730408 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:10.733986 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:10.735066 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:11.048066 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:11.231130 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:11.233168 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:11.234700 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:11.547857 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1119 01:59:11.567617 1466137 node_ready.go:57] node "addons-238225" has "Ready":"False" status (will retry)
	I1119 01:59:11.730491 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:11.732827 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:11.734655 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:12.047593 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:12.230416 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:12.234907 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:12.235848 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:12.548008 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:12.730539 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:12.732852 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:12.734622 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:13.047884 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:13.230459 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:13.232774 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:13.234376 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:13.548549 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:13.729980 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:13.733415 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:13.735110 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:14.047822 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1119 01:59:14.068009 1466137 node_ready.go:57] node "addons-238225" has "Ready":"False" status (will retry)
	I1119 01:59:14.230845 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:14.233028 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:14.234552 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:14.548387 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:14.731170 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:14.733331 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:14.734848 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:15.048079 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:15.290603 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:15.301057 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:15.302351 1466137 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1119 01:59:15.302410 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:15.609854 1466137 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1119 01:59:15.609940 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:15.613469 1466137 node_ready.go:49] node "addons-238225" is "Ready"
	I1119 01:59:15.613578 1466137 node_ready.go:38] duration metric: took 39.049396885s for node "addons-238225" to be "Ready" ...
	I1119 01:59:15.613627 1466137 api_server.go:52] waiting for apiserver process to appear ...
	I1119 01:59:15.613739 1466137 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 01:59:15.634647 1466137 api_server.go:72] duration metric: took 41.555720057s to wait for apiserver process to appear ...
	I1119 01:59:15.634722 1466137 api_server.go:88] waiting for apiserver healthz status ...
	I1119 01:59:15.634755 1466137 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1119 01:59:15.647926 1466137 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1119 01:59:15.649282 1466137 api_server.go:141] control plane version: v1.34.1
	I1119 01:59:15.649348 1466137 api_server.go:131] duration metric: took 14.602732ms to wait for apiserver health ...
	I1119 01:59:15.649370 1466137 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 01:59:15.653618 1466137 system_pods.go:59] 19 kube-system pods found
	I1119 01:59:15.653706 1466137 system_pods.go:61] "coredns-66bc5c9577-xmb7d" [005da4cd-c065-43b0-a68c-567b3aa0e823] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 01:59:15.653741 1466137 system_pods.go:61] "csi-hostpath-attacher-0" [25628434-04b4-4ee8-8e20-113140869edd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1119 01:59:15.653761 1466137 system_pods.go:61] "csi-hostpath-resizer-0" [63574fc5-5fe1-4ed1-b452-26f4c4a8dba4] Pending
	I1119 01:59:15.653790 1466137 system_pods.go:61] "csi-hostpathplugin-rfpfq" [3b2a8c8d-41b1-4a04-b4a6-8200b7915ccf] Pending
	I1119 01:59:15.653826 1466137 system_pods.go:61] "etcd-addons-238225" [1ca33989-ddee-4b6c-84c9-0a02d2edc2f4] Running
	I1119 01:59:15.653858 1466137 system_pods.go:61] "kindnet-8wgcz" [461d42c1-3fe2-4a61-bece-95eede038f6e] Running
	I1119 01:59:15.653878 1466137 system_pods.go:61] "kube-apiserver-addons-238225" [9474f34f-a012-4109-9d35-907ab113f885] Running
	I1119 01:59:15.653909 1466137 system_pods.go:61] "kube-controller-manager-addons-238225" [e38b8531-6894-4942-8cf8-ee082fede3fe] Running
	I1119 01:59:15.653931 1466137 system_pods.go:61] "kube-ingress-dns-minikube" [7e42e918-7ccd-4d6a-905a-be5916f26ea5] Pending
	I1119 01:59:15.653967 1466137 system_pods.go:61] "kube-proxy-6dppw" [d8300433-a767-4a0d-a70d-d64b75617671] Running
	I1119 01:59:15.654000 1466137 system_pods.go:61] "kube-scheduler-addons-238225" [37f9b2a2-0d47-4390-82b8-d58af8c0e3fd] Running
	I1119 01:59:15.654020 1466137 system_pods.go:61] "metrics-server-85b7d694d7-wjr8r" [c1645465-d21f-488d-b849-db3aca1a5ba3] Pending
	I1119 01:59:15.654038 1466137 system_pods.go:61] "nvidia-device-plugin-daemonset-fb27k" [403cf382-cc51-4315-b678-7c3168a8179a] Pending
	I1119 01:59:15.654093 1466137 system_pods.go:61] "registry-6b586f9694-2n7m4" [bda65628-e7f7-4672-860f-daef7b6a78b9] Pending
	I1119 01:59:15.654125 1466137 system_pods.go:61] "registry-creds-764b6fb674-6dd8r" [ca3fc492-0471-4cd7-8f7f-c6c9a672b1c7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1119 01:59:15.654166 1466137 system_pods.go:61] "registry-proxy-7m7l6" [7c2e134e-77c6-414b-9341-2e7db32808cd] Pending
	I1119 01:59:15.654196 1466137 system_pods.go:61] "snapshot-controller-7d9fbc56b8-5fsqs" [1f5e980a-d8cb-48c7-9838-77769445e689] Pending
	I1119 01:59:15.654216 1466137 system_pods.go:61] "snapshot-controller-7d9fbc56b8-x5sfx" [b9e2955c-41d4-4abd-b64a-a8bcb9ba52ef] Pending
	I1119 01:59:15.654247 1466137 system_pods.go:61] "storage-provisioner" [e5590246-55a0-4bdc-87e4-844adc590229] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 01:59:15.654267 1466137 system_pods.go:74] duration metric: took 4.872396ms to wait for pod list to return data ...
	I1119 01:59:15.654306 1466137 default_sa.go:34] waiting for default service account to be created ...
	I1119 01:59:15.663108 1466137 default_sa.go:45] found service account: "default"
	I1119 01:59:15.663175 1466137 default_sa.go:55] duration metric: took 8.83056ms for default service account to be created ...
	I1119 01:59:15.663200 1466137 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 01:59:15.671074 1466137 system_pods.go:86] 19 kube-system pods found
	I1119 01:59:15.671112 1466137 system_pods.go:89] "coredns-66bc5c9577-xmb7d" [005da4cd-c065-43b0-a68c-567b3aa0e823] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 01:59:15.671123 1466137 system_pods.go:89] "csi-hostpath-attacher-0" [25628434-04b4-4ee8-8e20-113140869edd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1119 01:59:15.671137 1466137 system_pods.go:89] "csi-hostpath-resizer-0" [63574fc5-5fe1-4ed1-b452-26f4c4a8dba4] Pending
	I1119 01:59:15.671142 1466137 system_pods.go:89] "csi-hostpathplugin-rfpfq" [3b2a8c8d-41b1-4a04-b4a6-8200b7915ccf] Pending
	I1119 01:59:15.671146 1466137 system_pods.go:89] "etcd-addons-238225" [1ca33989-ddee-4b6c-84c9-0a02d2edc2f4] Running
	I1119 01:59:15.671152 1466137 system_pods.go:89] "kindnet-8wgcz" [461d42c1-3fe2-4a61-bece-95eede038f6e] Running
	I1119 01:59:15.671168 1466137 system_pods.go:89] "kube-apiserver-addons-238225" [9474f34f-a012-4109-9d35-907ab113f885] Running
	I1119 01:59:15.671173 1466137 system_pods.go:89] "kube-controller-manager-addons-238225" [e38b8531-6894-4942-8cf8-ee082fede3fe] Running
	I1119 01:59:15.671177 1466137 system_pods.go:89] "kube-ingress-dns-minikube" [7e42e918-7ccd-4d6a-905a-be5916f26ea5] Pending
	I1119 01:59:15.671190 1466137 system_pods.go:89] "kube-proxy-6dppw" [d8300433-a767-4a0d-a70d-d64b75617671] Running
	I1119 01:59:15.671194 1466137 system_pods.go:89] "kube-scheduler-addons-238225" [37f9b2a2-0d47-4390-82b8-d58af8c0e3fd] Running
	I1119 01:59:15.671199 1466137 system_pods.go:89] "metrics-server-85b7d694d7-wjr8r" [c1645465-d21f-488d-b849-db3aca1a5ba3] Pending
	I1119 01:59:15.671210 1466137 system_pods.go:89] "nvidia-device-plugin-daemonset-fb27k" [403cf382-cc51-4315-b678-7c3168a8179a] Pending
	I1119 01:59:15.671218 1466137 system_pods.go:89] "registry-6b586f9694-2n7m4" [bda65628-e7f7-4672-860f-daef7b6a78b9] Pending
	I1119 01:59:15.671225 1466137 system_pods.go:89] "registry-creds-764b6fb674-6dd8r" [ca3fc492-0471-4cd7-8f7f-c6c9a672b1c7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1119 01:59:15.671229 1466137 system_pods.go:89] "registry-proxy-7m7l6" [7c2e134e-77c6-414b-9341-2e7db32808cd] Pending
	I1119 01:59:15.671240 1466137 system_pods.go:89] "snapshot-controller-7d9fbc56b8-5fsqs" [1f5e980a-d8cb-48c7-9838-77769445e689] Pending
	I1119 01:59:15.671249 1466137 system_pods.go:89] "snapshot-controller-7d9fbc56b8-x5sfx" [b9e2955c-41d4-4abd-b64a-a8bcb9ba52ef] Pending
	I1119 01:59:15.671255 1466137 system_pods.go:89] "storage-provisioner" [e5590246-55a0-4bdc-87e4-844adc590229] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 01:59:15.671282 1466137 retry.go:31] will retry after 221.734559ms: missing components: kube-dns
	I1119 01:59:15.742414 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:15.743664 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:15.744761 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:15.909429 1466137 system_pods.go:86] 19 kube-system pods found
	I1119 01:59:15.909471 1466137 system_pods.go:89] "coredns-66bc5c9577-xmb7d" [005da4cd-c065-43b0-a68c-567b3aa0e823] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 01:59:15.909481 1466137 system_pods.go:89] "csi-hostpath-attacher-0" [25628434-04b4-4ee8-8e20-113140869edd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1119 01:59:15.909489 1466137 system_pods.go:89] "csi-hostpath-resizer-0" [63574fc5-5fe1-4ed1-b452-26f4c4a8dba4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1119 01:59:15.909496 1466137 system_pods.go:89] "csi-hostpathplugin-rfpfq" [3b2a8c8d-41b1-4a04-b4a6-8200b7915ccf] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1119 01:59:15.909501 1466137 system_pods.go:89] "etcd-addons-238225" [1ca33989-ddee-4b6c-84c9-0a02d2edc2f4] Running
	I1119 01:59:15.909544 1466137 system_pods.go:89] "kindnet-8wgcz" [461d42c1-3fe2-4a61-bece-95eede038f6e] Running
	I1119 01:59:15.909549 1466137 system_pods.go:89] "kube-apiserver-addons-238225" [9474f34f-a012-4109-9d35-907ab113f885] Running
	I1119 01:59:15.909556 1466137 system_pods.go:89] "kube-controller-manager-addons-238225" [e38b8531-6894-4942-8cf8-ee082fede3fe] Running
	I1119 01:59:15.909562 1466137 system_pods.go:89] "kube-ingress-dns-minikube" [7e42e918-7ccd-4d6a-905a-be5916f26ea5] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1119 01:59:15.909567 1466137 system_pods.go:89] "kube-proxy-6dppw" [d8300433-a767-4a0d-a70d-d64b75617671] Running
	I1119 01:59:15.909571 1466137 system_pods.go:89] "kube-scheduler-addons-238225" [37f9b2a2-0d47-4390-82b8-d58af8c0e3fd] Running
	I1119 01:59:15.909576 1466137 system_pods.go:89] "metrics-server-85b7d694d7-wjr8r" [c1645465-d21f-488d-b849-db3aca1a5ba3] Pending
	I1119 01:59:15.909580 1466137 system_pods.go:89] "nvidia-device-plugin-daemonset-fb27k" [403cf382-cc51-4315-b678-7c3168a8179a] Pending
	I1119 01:59:15.909584 1466137 system_pods.go:89] "registry-6b586f9694-2n7m4" [bda65628-e7f7-4672-860f-daef7b6a78b9] Pending
	I1119 01:59:15.909590 1466137 system_pods.go:89] "registry-creds-764b6fb674-6dd8r" [ca3fc492-0471-4cd7-8f7f-c6c9a672b1c7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1119 01:59:15.909594 1466137 system_pods.go:89] "registry-proxy-7m7l6" [7c2e134e-77c6-414b-9341-2e7db32808cd] Pending
	I1119 01:59:15.909602 1466137 system_pods.go:89] "snapshot-controller-7d9fbc56b8-5fsqs" [1f5e980a-d8cb-48c7-9838-77769445e689] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1119 01:59:15.909609 1466137 system_pods.go:89] "snapshot-controller-7d9fbc56b8-x5sfx" [b9e2955c-41d4-4abd-b64a-a8bcb9ba52ef] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1119 01:59:15.909623 1466137 system_pods.go:89] "storage-provisioner" [e5590246-55a0-4bdc-87e4-844adc590229] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 01:59:15.909641 1466137 retry.go:31] will retry after 272.31622ms: missing components: kube-dns
	I1119 01:59:16.051352 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:16.189098 1466137 system_pods.go:86] 19 kube-system pods found
	I1119 01:59:16.189136 1466137 system_pods.go:89] "coredns-66bc5c9577-xmb7d" [005da4cd-c065-43b0-a68c-567b3aa0e823] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 01:59:16.189147 1466137 system_pods.go:89] "csi-hostpath-attacher-0" [25628434-04b4-4ee8-8e20-113140869edd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1119 01:59:16.189157 1466137 system_pods.go:89] "csi-hostpath-resizer-0" [63574fc5-5fe1-4ed1-b452-26f4c4a8dba4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1119 01:59:16.189164 1466137 system_pods.go:89] "csi-hostpathplugin-rfpfq" [3b2a8c8d-41b1-4a04-b4a6-8200b7915ccf] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1119 01:59:16.189169 1466137 system_pods.go:89] "etcd-addons-238225" [1ca33989-ddee-4b6c-84c9-0a02d2edc2f4] Running
	I1119 01:59:16.189174 1466137 system_pods.go:89] "kindnet-8wgcz" [461d42c1-3fe2-4a61-bece-95eede038f6e] Running
	I1119 01:59:16.189179 1466137 system_pods.go:89] "kube-apiserver-addons-238225" [9474f34f-a012-4109-9d35-907ab113f885] Running
	I1119 01:59:16.189183 1466137 system_pods.go:89] "kube-controller-manager-addons-238225" [e38b8531-6894-4942-8cf8-ee082fede3fe] Running
	I1119 01:59:16.189199 1466137 system_pods.go:89] "kube-ingress-dns-minikube" [7e42e918-7ccd-4d6a-905a-be5916f26ea5] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1119 01:59:16.189203 1466137 system_pods.go:89] "kube-proxy-6dppw" [d8300433-a767-4a0d-a70d-d64b75617671] Running
	I1119 01:59:16.189208 1466137 system_pods.go:89] "kube-scheduler-addons-238225" [37f9b2a2-0d47-4390-82b8-d58af8c0e3fd] Running
	I1119 01:59:16.189222 1466137 system_pods.go:89] "metrics-server-85b7d694d7-wjr8r" [c1645465-d21f-488d-b849-db3aca1a5ba3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1119 01:59:16.189230 1466137 system_pods.go:89] "nvidia-device-plugin-daemonset-fb27k" [403cf382-cc51-4315-b678-7c3168a8179a] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1119 01:59:16.189240 1466137 system_pods.go:89] "registry-6b586f9694-2n7m4" [bda65628-e7f7-4672-860f-daef7b6a78b9] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1119 01:59:16.189246 1466137 system_pods.go:89] "registry-creds-764b6fb674-6dd8r" [ca3fc492-0471-4cd7-8f7f-c6c9a672b1c7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1119 01:59:16.189254 1466137 system_pods.go:89] "registry-proxy-7m7l6" [7c2e134e-77c6-414b-9341-2e7db32808cd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1119 01:59:16.189263 1466137 system_pods.go:89] "snapshot-controller-7d9fbc56b8-5fsqs" [1f5e980a-d8cb-48c7-9838-77769445e689] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1119 01:59:16.189271 1466137 system_pods.go:89] "snapshot-controller-7d9fbc56b8-x5sfx" [b9e2955c-41d4-4abd-b64a-a8bcb9ba52ef] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1119 01:59:16.189277 1466137 system_pods.go:89] "storage-provisioner" [e5590246-55a0-4bdc-87e4-844adc590229] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 01:59:16.189293 1466137 retry.go:31] will retry after 366.562895ms: missing components: kube-dns
	I1119 01:59:16.230212 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:16.234426 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:16.238021 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:16.549021 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:16.567805 1466137 system_pods.go:86] 19 kube-system pods found
	I1119 01:59:16.567842 1466137 system_pods.go:89] "coredns-66bc5c9577-xmb7d" [005da4cd-c065-43b0-a68c-567b3aa0e823] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 01:59:16.567852 1466137 system_pods.go:89] "csi-hostpath-attacher-0" [25628434-04b4-4ee8-8e20-113140869edd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1119 01:59:16.567890 1466137 system_pods.go:89] "csi-hostpath-resizer-0" [63574fc5-5fe1-4ed1-b452-26f4c4a8dba4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1119 01:59:16.567919 1466137 system_pods.go:89] "csi-hostpathplugin-rfpfq" [3b2a8c8d-41b1-4a04-b4a6-8200b7915ccf] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1119 01:59:16.567929 1466137 system_pods.go:89] "etcd-addons-238225" [1ca33989-ddee-4b6c-84c9-0a02d2edc2f4] Running
	I1119 01:59:16.567936 1466137 system_pods.go:89] "kindnet-8wgcz" [461d42c1-3fe2-4a61-bece-95eede038f6e] Running
	I1119 01:59:16.567940 1466137 system_pods.go:89] "kube-apiserver-addons-238225" [9474f34f-a012-4109-9d35-907ab113f885] Running
	I1119 01:59:16.568020 1466137 system_pods.go:89] "kube-controller-manager-addons-238225" [e38b8531-6894-4942-8cf8-ee082fede3fe] Running
	I1119 01:59:16.568080 1466137 system_pods.go:89] "kube-ingress-dns-minikube" [7e42e918-7ccd-4d6a-905a-be5916f26ea5] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1119 01:59:16.568093 1466137 system_pods.go:89] "kube-proxy-6dppw" [d8300433-a767-4a0d-a70d-d64b75617671] Running
	I1119 01:59:16.568114 1466137 system_pods.go:89] "kube-scheduler-addons-238225" [37f9b2a2-0d47-4390-82b8-d58af8c0e3fd] Running
	I1119 01:59:16.568129 1466137 system_pods.go:89] "metrics-server-85b7d694d7-wjr8r" [c1645465-d21f-488d-b849-db3aca1a5ba3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1119 01:59:16.568136 1466137 system_pods.go:89] "nvidia-device-plugin-daemonset-fb27k" [403cf382-cc51-4315-b678-7c3168a8179a] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1119 01:59:16.568149 1466137 system_pods.go:89] "registry-6b586f9694-2n7m4" [bda65628-e7f7-4672-860f-daef7b6a78b9] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1119 01:59:16.568171 1466137 system_pods.go:89] "registry-creds-764b6fb674-6dd8r" [ca3fc492-0471-4cd7-8f7f-c6c9a672b1c7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1119 01:59:16.568186 1466137 system_pods.go:89] "registry-proxy-7m7l6" [7c2e134e-77c6-414b-9341-2e7db32808cd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1119 01:59:16.568202 1466137 system_pods.go:89] "snapshot-controller-7d9fbc56b8-5fsqs" [1f5e980a-d8cb-48c7-9838-77769445e689] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1119 01:59:16.568217 1466137 system_pods.go:89] "snapshot-controller-7d9fbc56b8-x5sfx" [b9e2955c-41d4-4abd-b64a-a8bcb9ba52ef] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1119 01:59:16.568238 1466137 system_pods.go:89] "storage-provisioner" [e5590246-55a0-4bdc-87e4-844adc590229] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 01:59:16.568262 1466137 retry.go:31] will retry after 394.336323ms: missing components: kube-dns
	I1119 01:59:16.730846 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:16.733093 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:16.735164 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:16.968776 1466137 system_pods.go:86] 19 kube-system pods found
	I1119 01:59:16.968822 1466137 system_pods.go:89] "coredns-66bc5c9577-xmb7d" [005da4cd-c065-43b0-a68c-567b3aa0e823] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 01:59:16.968855 1466137 system_pods.go:89] "csi-hostpath-attacher-0" [25628434-04b4-4ee8-8e20-113140869edd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1119 01:59:16.968873 1466137 system_pods.go:89] "csi-hostpath-resizer-0" [63574fc5-5fe1-4ed1-b452-26f4c4a8dba4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1119 01:59:16.968891 1466137 system_pods.go:89] "csi-hostpathplugin-rfpfq" [3b2a8c8d-41b1-4a04-b4a6-8200b7915ccf] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1119 01:59:16.968902 1466137 system_pods.go:89] "etcd-addons-238225" [1ca33989-ddee-4b6c-84c9-0a02d2edc2f4] Running
	I1119 01:59:16.968910 1466137 system_pods.go:89] "kindnet-8wgcz" [461d42c1-3fe2-4a61-bece-95eede038f6e] Running
	I1119 01:59:16.968943 1466137 system_pods.go:89] "kube-apiserver-addons-238225" [9474f34f-a012-4109-9d35-907ab113f885] Running
	I1119 01:59:16.968959 1466137 system_pods.go:89] "kube-controller-manager-addons-238225" [e38b8531-6894-4942-8cf8-ee082fede3fe] Running
	I1119 01:59:16.968970 1466137 system_pods.go:89] "kube-ingress-dns-minikube" [7e42e918-7ccd-4d6a-905a-be5916f26ea5] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1119 01:59:16.968975 1466137 system_pods.go:89] "kube-proxy-6dppw" [d8300433-a767-4a0d-a70d-d64b75617671] Running
	I1119 01:59:16.968986 1466137 system_pods.go:89] "kube-scheduler-addons-238225" [37f9b2a2-0d47-4390-82b8-d58af8c0e3fd] Running
	I1119 01:59:16.968996 1466137 system_pods.go:89] "metrics-server-85b7d694d7-wjr8r" [c1645465-d21f-488d-b849-db3aca1a5ba3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1119 01:59:16.969017 1466137 system_pods.go:89] "nvidia-device-plugin-daemonset-fb27k" [403cf382-cc51-4315-b678-7c3168a8179a] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1119 01:59:16.969036 1466137 system_pods.go:89] "registry-6b586f9694-2n7m4" [bda65628-e7f7-4672-860f-daef7b6a78b9] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1119 01:59:16.969049 1466137 system_pods.go:89] "registry-creds-764b6fb674-6dd8r" [ca3fc492-0471-4cd7-8f7f-c6c9a672b1c7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1119 01:59:16.969063 1466137 system_pods.go:89] "registry-proxy-7m7l6" [7c2e134e-77c6-414b-9341-2e7db32808cd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1119 01:59:16.969079 1466137 system_pods.go:89] "snapshot-controller-7d9fbc56b8-5fsqs" [1f5e980a-d8cb-48c7-9838-77769445e689] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1119 01:59:16.969104 1466137 system_pods.go:89] "snapshot-controller-7d9fbc56b8-x5sfx" [b9e2955c-41d4-4abd-b64a-a8bcb9ba52ef] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1119 01:59:16.969117 1466137 system_pods.go:89] "storage-provisioner" [e5590246-55a0-4bdc-87e4-844adc590229] Running
	I1119 01:59:16.969151 1466137 retry.go:31] will retry after 601.534725ms: missing components: kube-dns
	I1119 01:59:17.068119 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:17.237128 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:17.237229 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:17.237307 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:17.549867 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:17.576125 1466137 system_pods.go:86] 19 kube-system pods found
	I1119 01:59:17.576164 1466137 system_pods.go:89] "coredns-66bc5c9577-xmb7d" [005da4cd-c065-43b0-a68c-567b3aa0e823] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 01:59:17.576173 1466137 system_pods.go:89] "csi-hostpath-attacher-0" [25628434-04b4-4ee8-8e20-113140869edd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1119 01:59:17.576181 1466137 system_pods.go:89] "csi-hostpath-resizer-0" [63574fc5-5fe1-4ed1-b452-26f4c4a8dba4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1119 01:59:17.576217 1466137 system_pods.go:89] "csi-hostpathplugin-rfpfq" [3b2a8c8d-41b1-4a04-b4a6-8200b7915ccf] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1119 01:59:17.576229 1466137 system_pods.go:89] "etcd-addons-238225" [1ca33989-ddee-4b6c-84c9-0a02d2edc2f4] Running
	I1119 01:59:17.576235 1466137 system_pods.go:89] "kindnet-8wgcz" [461d42c1-3fe2-4a61-bece-95eede038f6e] Running
	I1119 01:59:17.576241 1466137 system_pods.go:89] "kube-apiserver-addons-238225" [9474f34f-a012-4109-9d35-907ab113f885] Running
	I1119 01:59:17.576250 1466137 system_pods.go:89] "kube-controller-manager-addons-238225" [e38b8531-6894-4942-8cf8-ee082fede3fe] Running
	I1119 01:59:17.576258 1466137 system_pods.go:89] "kube-ingress-dns-minikube" [7e42e918-7ccd-4d6a-905a-be5916f26ea5] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1119 01:59:17.576262 1466137 system_pods.go:89] "kube-proxy-6dppw" [d8300433-a767-4a0d-a70d-d64b75617671] Running
	I1119 01:59:17.576288 1466137 system_pods.go:89] "kube-scheduler-addons-238225" [37f9b2a2-0d47-4390-82b8-d58af8c0e3fd] Running
	I1119 01:59:17.576302 1466137 system_pods.go:89] "metrics-server-85b7d694d7-wjr8r" [c1645465-d21f-488d-b849-db3aca1a5ba3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1119 01:59:17.576314 1466137 system_pods.go:89] "nvidia-device-plugin-daemonset-fb27k" [403cf382-cc51-4315-b678-7c3168a8179a] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1119 01:59:17.576327 1466137 system_pods.go:89] "registry-6b586f9694-2n7m4" [bda65628-e7f7-4672-860f-daef7b6a78b9] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1119 01:59:17.576336 1466137 system_pods.go:89] "registry-creds-764b6fb674-6dd8r" [ca3fc492-0471-4cd7-8f7f-c6c9a672b1c7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1119 01:59:17.576346 1466137 system_pods.go:89] "registry-proxy-7m7l6" [7c2e134e-77c6-414b-9341-2e7db32808cd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1119 01:59:17.576368 1466137 system_pods.go:89] "snapshot-controller-7d9fbc56b8-5fsqs" [1f5e980a-d8cb-48c7-9838-77769445e689] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1119 01:59:17.576385 1466137 system_pods.go:89] "snapshot-controller-7d9fbc56b8-x5sfx" [b9e2955c-41d4-4abd-b64a-a8bcb9ba52ef] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1119 01:59:17.576404 1466137 system_pods.go:89] "storage-provisioner" [e5590246-55a0-4bdc-87e4-844adc590229] Running
	I1119 01:59:17.576426 1466137 retry.go:31] will retry after 828.771953ms: missing components: kube-dns
	I1119 01:59:17.763007 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:17.763370 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:17.763475 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:18.056215 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:18.230940 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:18.233407 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:18.235401 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:18.409591 1466137 system_pods.go:86] 19 kube-system pods found
	I1119 01:59:18.409624 1466137 system_pods.go:89] "coredns-66bc5c9577-xmb7d" [005da4cd-c065-43b0-a68c-567b3aa0e823] Running
	I1119 01:59:18.409636 1466137 system_pods.go:89] "csi-hostpath-attacher-0" [25628434-04b4-4ee8-8e20-113140869edd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1119 01:59:18.409643 1466137 system_pods.go:89] "csi-hostpath-resizer-0" [63574fc5-5fe1-4ed1-b452-26f4c4a8dba4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1119 01:59:18.409673 1466137 system_pods.go:89] "csi-hostpathplugin-rfpfq" [3b2a8c8d-41b1-4a04-b4a6-8200b7915ccf] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1119 01:59:18.409699 1466137 system_pods.go:89] "etcd-addons-238225" [1ca33989-ddee-4b6c-84c9-0a02d2edc2f4] Running
	I1119 01:59:18.409708 1466137 system_pods.go:89] "kindnet-8wgcz" [461d42c1-3fe2-4a61-bece-95eede038f6e] Running
	I1119 01:59:18.409712 1466137 system_pods.go:89] "kube-apiserver-addons-238225" [9474f34f-a012-4109-9d35-907ab113f885] Running
	I1119 01:59:18.409719 1466137 system_pods.go:89] "kube-controller-manager-addons-238225" [e38b8531-6894-4942-8cf8-ee082fede3fe] Running
	I1119 01:59:18.409726 1466137 system_pods.go:89] "kube-ingress-dns-minikube" [7e42e918-7ccd-4d6a-905a-be5916f26ea5] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1119 01:59:18.409730 1466137 system_pods.go:89] "kube-proxy-6dppw" [d8300433-a767-4a0d-a70d-d64b75617671] Running
	I1119 01:59:18.409744 1466137 system_pods.go:89] "kube-scheduler-addons-238225" [37f9b2a2-0d47-4390-82b8-d58af8c0e3fd] Running
	I1119 01:59:18.409751 1466137 system_pods.go:89] "metrics-server-85b7d694d7-wjr8r" [c1645465-d21f-488d-b849-db3aca1a5ba3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1119 01:59:18.409758 1466137 system_pods.go:89] "nvidia-device-plugin-daemonset-fb27k" [403cf382-cc51-4315-b678-7c3168a8179a] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1119 01:59:18.409768 1466137 system_pods.go:89] "registry-6b586f9694-2n7m4" [bda65628-e7f7-4672-860f-daef7b6a78b9] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1119 01:59:18.409777 1466137 system_pods.go:89] "registry-creds-764b6fb674-6dd8r" [ca3fc492-0471-4cd7-8f7f-c6c9a672b1c7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1119 01:59:18.409782 1466137 system_pods.go:89] "registry-proxy-7m7l6" [7c2e134e-77c6-414b-9341-2e7db32808cd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1119 01:59:18.409797 1466137 system_pods.go:89] "snapshot-controller-7d9fbc56b8-5fsqs" [1f5e980a-d8cb-48c7-9838-77769445e689] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1119 01:59:18.409804 1466137 system_pods.go:89] "snapshot-controller-7d9fbc56b8-x5sfx" [b9e2955c-41d4-4abd-b64a-a8bcb9ba52ef] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1119 01:59:18.409810 1466137 system_pods.go:89] "storage-provisioner" [e5590246-55a0-4bdc-87e4-844adc590229] Running
	I1119 01:59:18.409820 1466137 system_pods.go:126] duration metric: took 2.746601265s to wait for k8s-apps to be running ...
	I1119 01:59:18.409832 1466137 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 01:59:18.409894 1466137 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 01:59:18.422625 1466137 system_svc.go:56] duration metric: took 12.78417ms WaitForService to wait for kubelet
	I1119 01:59:18.422651 1466137 kubeadm.go:587] duration metric: took 44.343735493s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 01:59:18.422691 1466137 node_conditions.go:102] verifying NodePressure condition ...
	I1119 01:59:18.425390 1466137 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1119 01:59:18.425420 1466137 node_conditions.go:123] node cpu capacity is 2
	I1119 01:59:18.425435 1466137 node_conditions.go:105] duration metric: took 2.725624ms to run NodePressure ...
	I1119 01:59:18.425448 1466137 start.go:242] waiting for startup goroutines ...
	I1119 01:59:18.548772 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:18.730879 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:18.733157 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:18.735656 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:19.049002 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:19.232426 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:19.233801 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:19.234926 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:19.549308 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:19.730480 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:19.732893 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:19.735301 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:20.049624 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:20.231331 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:20.234128 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:20.235046 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:20.548349 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:20.730464 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:20.733470 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:20.735547 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:21.048196 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:21.231458 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:21.234973 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:21.237869 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:21.548644 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:21.731738 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:21.734645 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:21.736160 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:22.048946 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:22.231616 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:22.233662 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:22.236074 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:22.548683 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:22.730901 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:22.733817 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:22.737792 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:23.048443 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:23.230778 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:23.233469 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:23.235623 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:23.548854 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:23.730758 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:23.733280 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:23.735183 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:24.049105 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:24.231457 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:24.234469 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:24.235642 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:24.548144 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:24.732451 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:24.735749 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:24.736194 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:25.049579 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:25.230811 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:25.233540 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:25.235460 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:25.549395 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:25.730323 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:25.736356 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:25.737404 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:26.050374 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:26.231204 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:26.235444 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:26.236401 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:26.548636 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:26.730467 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:26.734042 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:26.736250 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:27.049260 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:27.230504 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:27.234221 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:27.235950 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:27.549251 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:27.730402 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:27.736290 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:27.736784 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:28.049389 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:28.231161 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:28.234690 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:28.236884 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:28.550335 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:28.734513 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:28.736941 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:28.738290 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:29.049647 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:29.234811 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:29.238669 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:29.239639 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:29.552590 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:29.734478 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:29.736981 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:29.738391 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:30.050018 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:30.235006 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:30.235422 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:30.238186 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:30.549879 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:30.741657 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:30.742893 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:30.743006 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:31.048984 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:31.231016 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:31.233230 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:31.234754 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:31.549662 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:31.744683 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:31.744814 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:31.745146 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:32.049414 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:32.235748 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:32.235835 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:32.235989 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:32.549137 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:32.741997 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:32.749980 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:32.752299 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:33.048996 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:33.233422 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:33.242643 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:33.243107 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:33.550275 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:33.732988 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:33.735922 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:33.736522 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:34.048930 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:34.231433 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:34.234803 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:34.236820 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:34.548504 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:34.730967 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:34.733593 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:34.735862 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:35.049220 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:35.231793 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:35.238202 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:35.238783 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:35.548609 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:35.730915 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:35.734605 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:35.736389 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:36.057881 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:36.231145 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:36.235019 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:36.235853 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:36.548125 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:36.731212 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:36.734857 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:36.735012 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:37.048953 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:37.231197 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:37.234563 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:37.235978 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:37.549612 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:37.730324 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:37.733928 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:37.735294 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:38.049749 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:38.231559 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:38.237056 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:38.237429 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:38.549406 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:38.730455 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:38.733201 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:38.735157 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:39.048752 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:39.231212 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:39.239420 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:39.239791 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:39.548288 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:39.730283 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:39.734660 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:39.736197 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:40.050088 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:40.231730 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:40.234367 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:40.236837 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:40.548586 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:40.730547 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:40.732927 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:40.735063 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:41.049005 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:41.231085 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:41.234056 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:41.235427 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:41.549829 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:41.732071 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:41.733494 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:41.735501 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:42.049366 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:42.235531 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:42.255794 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:42.256352 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:42.548655 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:42.730394 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:42.734410 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:42.735631 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:43.048898 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:43.231228 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:43.233874 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:43.236054 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:43.548775 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:43.730950 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:43.734674 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:43.736369 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:44.049257 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:44.230240 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:44.235683 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:44.236248 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:44.548765 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:44.731112 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:44.735279 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:44.735600 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:45.050504 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:45.251021 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:45.252066 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:45.252668 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:45.547967 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:45.731170 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:45.734374 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:45.735606 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:46.048773 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:46.231127 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:46.233425 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:46.234743 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:46.548433 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:46.730617 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:46.734091 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:46.735210 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:47.048722 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:47.231016 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:47.236860 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:47.237394 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:47.549262 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:47.732867 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:47.733704 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:47.734941 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:48.048933 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:48.231410 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:48.234873 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:48.235771 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:48.548119 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:48.730981 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:48.733438 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:48.735195 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:49.048386 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:49.236342 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:49.236591 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:49.236973 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:49.548486 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:49.730411 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:49.735452 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:49.737881 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:50.048893 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:50.230557 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:50.234236 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:50.235811 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:50.548628 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:50.731102 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:50.733230 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:50.735357 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:51.049294 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:51.230027 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:51.233277 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:51.234866 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:51.548453 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:51.730303 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:51.734152 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:51.735778 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:52.049191 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:52.231196 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:52.234627 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:52.235685 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:52.548605 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:52.730724 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:52.732969 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:52.735163 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:53.049609 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:53.230354 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:53.233909 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:53.236097 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:53.548260 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:53.731377 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:53.737583 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:53.740453 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 01:59:54.049146 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:54.230853 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:54.233089 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:54.244332 1466137 kapi.go:107] duration metric: took 1m15.012306387s to wait for kubernetes.io/minikube-addons=registry ...
	I1119 01:59:54.550633 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:54.731453 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:54.733651 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:55.048231 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:55.232363 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:55.233713 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:55.548358 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:55.731047 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:55.733264 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:56.050713 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:56.230349 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:56.233934 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:56.548787 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:56.731060 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:56.733320 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:57.048825 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:57.231404 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:57.233614 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:57.548254 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:57.730979 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:57.733150 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:58.049287 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:58.230279 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:58.233700 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:58.549144 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:58.731068 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:58.733036 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:59.048824 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:59.232582 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:59.234606 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 01:59:59.548036 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 01:59:59.732228 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 01:59:59.738081 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:00.073168 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 02:00:00.258113 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:00.258834 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 02:00:00.554400 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 02:00:00.754196 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 02:00:00.754355 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:01.056243 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 02:00:01.240979 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 02:00:01.241298 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:01.549391 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 02:00:01.759748 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:01.759884 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 02:00:02.054470 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 02:00:02.231962 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 02:00:02.234915 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:02.551446 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 02:00:02.731042 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 02:00:02.734571 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:03.049973 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 02:00:03.232343 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 02:00:03.233828 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:03.548299 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 02:00:03.734666 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 02:00:03.738204 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:04.086316 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 02:00:04.230871 1466137 kapi.go:107] duration metric: took 1m20.503523344s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1119 02:00:04.233385 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:04.234452 1466137 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-238225 cluster.
	I1119 02:00:04.237397 1466137 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1119 02:00:04.240322 1466137 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1119 02:00:04.548634 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 02:00:04.734045 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:05.049056 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 02:00:05.234501 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:05.556461 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 02:00:05.740320 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:06.061247 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 02:00:06.234922 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:06.548477 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 02:00:06.734297 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:07.054308 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 02:00:07.234200 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:07.551851 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 02:00:07.734067 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:08.048414 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 02:00:08.233332 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:08.549149 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 02:00:08.734676 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:09.052919 1466137 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 02:00:09.234271 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:09.548734 1466137 kapi.go:107] duration metric: took 1m29.00389633s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1119 02:00:09.733719 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:10.234086 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:10.734549 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:11.239264 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:11.733628 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:12.234125 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:12.733958 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:13.233316 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:13.733676 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:14.234058 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:14.734015 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:15.233650 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:15.734239 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:16.234308 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:16.733727 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:17.234424 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:17.733576 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:18.234350 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:18.734354 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:19.234003 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:19.733680 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:20.234103 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:20.733825 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:21.234935 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:21.734629 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:22.234454 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:22.733567 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:23.233819 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:23.733674 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:24.233943 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:24.735087 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:25.233592 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:25.734571 1466137 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 02:00:26.234828 1466137 kapi.go:107] duration metric: took 1m46.004400513s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1119 02:00:26.237912 1466137 out.go:179] * Enabled addons: ingress-dns, inspektor-gadget, cloud-spanner, amd-gpu-device-plugin, nvidia-device-plugin, registry-creds, storage-provisioner, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, gcp-auth, csi-hostpath-driver, ingress
	I1119 02:00:26.241051 1466137 addons.go:515] duration metric: took 1m52.161642314s for enable addons: enabled=[ingress-dns inspektor-gadget cloud-spanner amd-gpu-device-plugin nvidia-device-plugin registry-creds storage-provisioner metrics-server yakd storage-provisioner-rancher volumesnapshots registry gcp-auth csi-hostpath-driver ingress]
	I1119 02:00:26.241146 1466137 start.go:247] waiting for cluster config update ...
	I1119 02:00:26.241173 1466137 start.go:256] writing updated cluster config ...
	I1119 02:00:26.241501 1466137 ssh_runner.go:195] Run: rm -f paused
	I1119 02:00:26.246110 1466137 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 02:00:26.249496 1466137 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-xmb7d" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:00:26.254056 1466137 pod_ready.go:94] pod "coredns-66bc5c9577-xmb7d" is "Ready"
	I1119 02:00:26.254127 1466137 pod_ready.go:86] duration metric: took 4.567261ms for pod "coredns-66bc5c9577-xmb7d" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:00:26.256333 1466137 pod_ready.go:83] waiting for pod "etcd-addons-238225" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:00:26.263228 1466137 pod_ready.go:94] pod "etcd-addons-238225" is "Ready"
	I1119 02:00:26.263255 1466137 pod_ready.go:86] duration metric: took 6.899514ms for pod "etcd-addons-238225" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:00:26.265932 1466137 pod_ready.go:83] waiting for pod "kube-apiserver-addons-238225" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:00:26.270854 1466137 pod_ready.go:94] pod "kube-apiserver-addons-238225" is "Ready"
	I1119 02:00:26.270879 1466137 pod_ready.go:86] duration metric: took 4.919421ms for pod "kube-apiserver-addons-238225" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:00:26.273537 1466137 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-238225" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:00:26.650708 1466137 pod_ready.go:94] pod "kube-controller-manager-addons-238225" is "Ready"
	I1119 02:00:26.650735 1466137 pod_ready.go:86] duration metric: took 377.172372ms for pod "kube-controller-manager-addons-238225" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:00:26.849990 1466137 pod_ready.go:83] waiting for pod "kube-proxy-6dppw" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:00:27.250626 1466137 pod_ready.go:94] pod "kube-proxy-6dppw" is "Ready"
	I1119 02:00:27.250656 1466137 pod_ready.go:86] duration metric: took 400.636804ms for pod "kube-proxy-6dppw" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:00:27.500641 1466137 pod_ready.go:83] waiting for pod "kube-scheduler-addons-238225" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:00:27.850064 1466137 pod_ready.go:94] pod "kube-scheduler-addons-238225" is "Ready"
	I1119 02:00:27.850132 1466137 pod_ready.go:86] duration metric: took 349.462849ms for pod "kube-scheduler-addons-238225" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:00:27.850153 1466137 pod_ready.go:40] duration metric: took 1.60400953s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 02:00:27.918164 1466137 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1119 02:00:27.921455 1466137 out.go:179] * Done! kubectl is now configured to use "addons-238225" cluster and "default" namespace by default
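Editor's note: the kapi.go entries above poll the API server for each addon's pods by label selector at roughly 500ms intervals until they leave Pending, and the pod_ready.go entries at the end of the run do the equivalent readiness check for the core kube-system components. Below is a minimal sketch of that style of wait loop using client-go and the apimachinery wait helpers; the namespace, selector, interval, and timeout are illustrative, and this is not minikube's actual kapi.go code.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPods polls until every pod matching selector in ns is Running,
// mirroring the "waiting for pod ... current state: Pending" loop above.
func waitForPods(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return false, nil // transient API error: keep polling
			}
			if len(pods.Items) == 0 {
				return false, nil // nothing scheduled yet
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					return false, nil
				}
			}
			return true, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	start := time.Now()
	if err := waitForPods(context.Background(), cs, "ingress-nginx",
		"app.kubernetes.io/name=ingress-nginx", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Printf("took %s to wait for app.kubernetes.io/name=ingress-nginx\n", time.Since(start))
}

The gcp-auth messages above also note that GCP credentials are mounted into every new pod unless the pod carries the gcp-auth-skip-secret label. A sketch of opting a pod out, reusing the cfg/cs setup from the block above; the label value "true" is an assumption (the output only names the key), and the pod name and image are illustrative.

// createOptedOutPod creates a pod labelled so the gcp-auth webhook should
// skip mounting GCP credentials into it (label value assumed to be "true").
func createOptedOutPod(ctx context.Context, cs kubernetes.Interface) error {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "no-gcp-creds",
			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "app",
				Image:   "busybox:stable",
				Command: []string{"sleep", "3600"},
			}},
		},
	}
	_, err := cs.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{})
	return err
}

As the output above states, pods created before the addon was enabled only pick up the credential mount if they are recreated or the addon enable is rerun with --refresh.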
	
	
	==> CRI-O <==
	Nov 19 02:00:51 addons-238225 crio[827]: time="2025-11-19T02:00:51.352023295Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:00:51 addons-238225 crio[827]: time="2025-11-19T02:00:51.360577531Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:00:51 addons-238225 crio[827]: time="2025-11-19T02:00:51.361234223Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:00:51 addons-238225 crio[827]: time="2025-11-19T02:00:51.378502609Z" level=info msg="Created container 6e3119ea8baa23e5cf3fc989b3eaab5f9efd385886b22f5ec1333802e59dc373: default/registry-test/registry-test" id=557be6c4-ff9c-4b7a-bd6e-d9c940f60efc name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 02:00:51 addons-238225 crio[827]: time="2025-11-19T02:00:51.379673966Z" level=info msg="Starting container: 6e3119ea8baa23e5cf3fc989b3eaab5f9efd385886b22f5ec1333802e59dc373" id=ad3c4f6c-42e0-491e-b4a3-f230f3800c1d name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 02:00:51 addons-238225 crio[827]: time="2025-11-19T02:00:51.381763011Z" level=info msg="Started container" PID=5177 containerID=6e3119ea8baa23e5cf3fc989b3eaab5f9efd385886b22f5ec1333802e59dc373 description=default/registry-test/registry-test id=ad3c4f6c-42e0-491e-b4a3-f230f3800c1d name=/runtime.v1.RuntimeService/StartContainer sandboxID=a579de6d65ea4b5874482fdff9114548de8ab2f081c34d084d7fd9e92d6aa2cc
	Nov 19 02:00:52 addons-238225 crio[827]: time="2025-11-19T02:00:52.144661386Z" level=info msg="Running pod sandbox: local-path-storage/helper-pod-create-pvc-62679135-f675-42e1-8d98-c37f6ea08626/POD" id=bcf867cf-3de1-4249-9611-cc8820d021e0 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 02:00:52 addons-238225 crio[827]: time="2025-11-19T02:00:52.144739972Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:00:52 addons-238225 crio[827]: time="2025-11-19T02:00:52.154125885Z" level=info msg="Got pod network &{Name:helper-pod-create-pvc-62679135-f675-42e1-8d98-c37f6ea08626 Namespace:local-path-storage ID:d33a215608296b34db5ea60b12f03a4977f70f5abd42994cca8a3c04b3965dc1 UID:2822ff85-be16-4469-903a-671f59bca12e NetNS:/var/run/netns/194f6b89-e35e-437f-8dc0-cc567ab919c1 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000276b90}] Aliases:map[]}"
	Nov 19 02:00:52 addons-238225 crio[827]: time="2025-11-19T02:00:52.154167623Z" level=info msg="Adding pod local-path-storage_helper-pod-create-pvc-62679135-f675-42e1-8d98-c37f6ea08626 to CNI network \"kindnet\" (type=ptp)"
	Nov 19 02:00:52 addons-238225 crio[827]: time="2025-11-19T02:00:52.164542598Z" level=info msg="Got pod network &{Name:helper-pod-create-pvc-62679135-f675-42e1-8d98-c37f6ea08626 Namespace:local-path-storage ID:d33a215608296b34db5ea60b12f03a4977f70f5abd42994cca8a3c04b3965dc1 UID:2822ff85-be16-4469-903a-671f59bca12e NetNS:/var/run/netns/194f6b89-e35e-437f-8dc0-cc567ab919c1 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000276b90}] Aliases:map[]}"
	Nov 19 02:00:52 addons-238225 crio[827]: time="2025-11-19T02:00:52.164687865Z" level=info msg="Checking pod local-path-storage_helper-pod-create-pvc-62679135-f675-42e1-8d98-c37f6ea08626 for CNI network kindnet (type=ptp)"
	Nov 19 02:00:52 addons-238225 crio[827]: time="2025-11-19T02:00:52.168134Z" level=info msg="Ran pod sandbox d33a215608296b34db5ea60b12f03a4977f70f5abd42994cca8a3c04b3965dc1 with infra container: local-path-storage/helper-pod-create-pvc-62679135-f675-42e1-8d98-c37f6ea08626/POD" id=bcf867cf-3de1-4249-9611-cc8820d021e0 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 02:00:52 addons-238225 crio[827]: time="2025-11-19T02:00:52.171685814Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=2791f6e3-0392-4dab-9647-19784a40afbf name=/runtime.v1.ImageService/ImageStatus
	Nov 19 02:00:52 addons-238225 crio[827]: time="2025-11-19T02:00:52.171850781Z" level=info msg="Image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 not found" id=2791f6e3-0392-4dab-9647-19784a40afbf name=/runtime.v1.ImageService/ImageStatus
	Nov 19 02:00:52 addons-238225 crio[827]: time="2025-11-19T02:00:52.1719006Z" level=info msg="Neither image nor artfiact docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 found" id=2791f6e3-0392-4dab-9647-19784a40afbf name=/runtime.v1.ImageService/ImageStatus
	Nov 19 02:00:52 addons-238225 crio[827]: time="2025-11-19T02:00:52.172730701Z" level=info msg="Pulling image: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=92c58e4f-c4bb-46b6-b080-9a134bf984e2 name=/runtime.v1.ImageService/PullImage
	Nov 19 02:00:52 addons-238225 crio[827]: time="2025-11-19T02:00:52.175385495Z" level=info msg="Trying to access \"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\""
	Nov 19 02:00:53 addons-238225 crio[827]: time="2025-11-19T02:00:53.248741529Z" level=info msg="Stopping pod sandbox: a579de6d65ea4b5874482fdff9114548de8ab2f081c34d084d7fd9e92d6aa2cc" id=fb80f24f-1a60-4120-aaa6-a3a977317400 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 19 02:00:53 addons-238225 crio[827]: time="2025-11-19T02:00:53.249041679Z" level=info msg="Got pod network &{Name:registry-test Namespace:default ID:a579de6d65ea4b5874482fdff9114548de8ab2f081c34d084d7fd9e92d6aa2cc UID:2baf8cd1-8924-4fb3-82d9-700c13fe0f27 NetNS:/var/run/netns/0b109a6d-b0ae-4c9f-9db3-f6c889fa5525 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000adad38}] Aliases:map[]}"
	Nov 19 02:00:53 addons-238225 crio[827]: time="2025-11-19T02:00:53.249176624Z" level=info msg="Deleting pod default_registry-test from CNI network \"kindnet\" (type=ptp)"
	Nov 19 02:00:53 addons-238225 crio[827]: time="2025-11-19T02:00:53.271600224Z" level=info msg="Stopped pod sandbox: a579de6d65ea4b5874482fdff9114548de8ab2f081c34d084d7fd9e92d6aa2cc" id=fb80f24f-1a60-4120-aaa6-a3a977317400 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 19 02:00:54 addons-238225 crio[827]: time="2025-11-19T02:00:54.255433292Z" level=info msg="Removing container: 6e3119ea8baa23e5cf3fc989b3eaab5f9efd385886b22f5ec1333802e59dc373" id=fa71cf90-510f-42d1-ad83-c47cf5456ee6 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 19 02:00:54 addons-238225 crio[827]: time="2025-11-19T02:00:54.258216845Z" level=info msg="Error loading conmon cgroup of container 6e3119ea8baa23e5cf3fc989b3eaab5f9efd385886b22f5ec1333802e59dc373: cgroup deleted" id=fa71cf90-510f-42d1-ad83-c47cf5456ee6 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 19 02:00:54 addons-238225 crio[827]: time="2025-11-19T02:00:54.262777075Z" level=info msg="Removed container 6e3119ea8baa23e5cf3fc989b3eaab5f9efd385886b22f5ec1333802e59dc373: default/registry-test/registry-test" id=fa71cf90-510f-42d1-ad83-c47cf5456ee6 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	27de6957d02e9       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          29 seconds ago       Running             busybox                                  0                   639abf5992c53       busybox                                    default
	daa6bf3f2ef67       registry.k8s.io/ingress-nginx/controller@sha256:655333e68deab34ee3701f400c4d5d9709000cdfdadb802e4bd7500b027e1259                             35 seconds ago       Running             controller                               0                   2aacd67009a16       ingress-nginx-controller-6c8bf45fb-gsl4s   ingress-nginx
	9c232d33326a7       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          51 seconds ago       Running             csi-snapshotter                          0                   4572422659b3b       csi-hostpathplugin-rfpfq                   kube-system
	772cfe62f02aa       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          53 seconds ago       Running             csi-provisioner                          0                   4572422659b3b       csi-hostpathplugin-rfpfq                   kube-system
	21ceae69f9b81       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            54 seconds ago       Running             liveness-probe                           0                   4572422659b3b       csi-hostpathplugin-rfpfq                   kube-system
	d64b782c68c24       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           55 seconds ago       Running             hostpath                                 0                   4572422659b3b       csi-hostpathplugin-rfpfq                   kube-system
	aa2392de9c092       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 57 seconds ago       Running             gcp-auth                                 0                   ef5b13b05972f       gcp-auth-78565c9fb4-nq6z4                  gcp-auth
	2d6558779eef9       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c2c5268a38de5c792beb84122c5350c644fbb9b85e04342ef72fa9a6d052f0b0                            About a minute ago   Running             gadget                                   0                   84a8a4e10dbde       gadget-9r7cc                               gadget
	5e8f0f7f44431       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                About a minute ago   Running             node-driver-registrar                    0                   4572422659b3b       csi-hostpathplugin-rfpfq                   kube-system
	7011495914d49       32daba64b064c571f27dbd4e285969f47f8e5dd6c692279b48622e941b4d137f                                                                             About a minute ago   Exited              patch                                    1                   12993dbcf9ac8       ingress-nginx-admission-patch-vwbkh        ingress-nginx
	25d05604a9dbf       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e733096c3a5b75504c6380083abc960c9627efd23e099df780adfb4eec197583                   About a minute ago   Exited              create                                   0                   5db638a46bc3a       ingress-nginx-admission-create-7dtkx       ingress-nginx
	b38eaf566b86b       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              About a minute ago   Running             registry-proxy                           0                   3fb5e98f451bb       registry-proxy-7m7l6                       kube-system
	4965ee5c7f78b       gcr.io/cloud-spanner-emulator/emulator@sha256:c2688dc4b7ecb4546084321d63c2b3b616a54263488137e18fcb7c7005aef086                               About a minute ago   Running             cloud-spanner-emulator                   0                   0cc824b0072b3       cloud-spanner-emulator-6f9fcf858b-hklsv    default
	913d1dc20a3a2       nvcr.io/nvidia/k8s-device-plugin@sha256:80924fc52384565a7c59f1e2f12319fb8f2b02a1c974bb3d73a9853fe01af874                                     About a minute ago   Running             nvidia-device-plugin-ctr                 0                   6e3609ae0e684       nvidia-device-plugin-daemonset-fb27k       kube-system
	d4baa1f0a47d3       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           About a minute ago   Running             registry                                 0                   25d89c4d90a2c       registry-6b586f9694-2n7m4                  kube-system
	91ecb63aa939e       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   931506926cbcd       snapshot-controller-7d9fbc56b8-5fsqs       kube-system
	c53519ba9e004       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              About a minute ago   Running             csi-resizer                              0                   08242009eb98f       csi-hostpath-resizer-0                     kube-system
	3405c2a94e1bf       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              About a minute ago   Running             yakd                                     0                   5d627c54ecfea       yakd-dashboard-5ff678cb9-97cnn             yakd-dashboard
	dcca1b842fe44       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   About a minute ago   Running             csi-external-health-monitor-controller   0                   4572422659b3b       csi-hostpathplugin-rfpfq                   kube-system
	e4c59f62ececb       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        About a minute ago   Running             metrics-server                           0                   6c70658571a4e       metrics-server-85b7d694d7-wjr8r            kube-system
	e46dac206369b       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             About a minute ago   Running             local-path-provisioner                   0                   c2b909d8d7192       local-path-provisioner-648f6765c9-t2frb    local-path-storage
	be9d5b6bedfbc       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               About a minute ago   Running             minikube-ingress-dns                     0                   3c88179b82e2a       kube-ingress-dns-minikube                  kube-system
	d79a6a486de50       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   0358d181f6f4b       snapshot-controller-7d9fbc56b8-x5sfx       kube-system
	99e3704db1eb4       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             About a minute ago   Running             csi-attacher                             0                   bc61a4410b44b       csi-hostpath-attacher-0                    kube-system
	b94070f6dc6d4       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             About a minute ago   Running             coredns                                  0                   197105b85621a       coredns-66bc5c9577-xmb7d                   kube-system
	d0f307f4b6c34       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             About a minute ago   Running             storage-provisioner                      0                   0dff9d40ff5e5       storage-provisioner                        kube-system
	0c54dc25c8ad5       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             2 minutes ago        Running             kube-proxy                               0                   fadc826f254d1       kube-proxy-6dppw                           kube-system
	a841f7bd1c931       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             2 minutes ago        Running             kindnet-cni                              0                   7ed860b050687       kindnet-8wgcz                              kube-system
	76ee598a60e1e       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             2 minutes ago        Running             kube-scheduler                           0                   1b70a5487703d       kube-scheduler-addons-238225               kube-system
	a757a1a6114f8       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             2 minutes ago        Running             kube-apiserver                           0                   ae1622a386a04       kube-apiserver-addons-238225               kube-system
	7a77a55a81c01       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             2 minutes ago        Running             kube-controller-manager                  0                   1df5d8abbca42       kube-controller-manager-addons-238225      kube-system
	85abfad90a4c2       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             2 minutes ago        Running             etcd                                     0                   47b2af3c18cf3       etcd-addons-238225                         kube-system
	
	
	==> coredns [b94070f6dc6d4ea17b3a67020e38e4caa93a1b8b83d5bb691770abfbccddba96] <==
	[INFO] 10.244.0.12:48347 - 16613 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002687744s
	[INFO] 10.244.0.12:48347 - 60105 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000109789s
	[INFO] 10.244.0.12:48347 - 1641 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000106137s
	[INFO] 10.244.0.12:50412 - 33421 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000275486s
	[INFO] 10.244.0.12:50412 - 33199 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000311193s
	[INFO] 10.244.0.12:33771 - 59317 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000108172s
	[INFO] 10.244.0.12:33771 - 59128 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000081122s
	[INFO] 10.244.0.12:53249 - 32158 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000101601s
	[INFO] 10.244.0.12:53249 - 31986 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000082796s
	[INFO] 10.244.0.12:47553 - 3677 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001327076s
	[INFO] 10.244.0.12:47553 - 3480 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.0013841s
	[INFO] 10.244.0.12:43508 - 33585 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000111463s
	[INFO] 10.244.0.12:43508 - 33388 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000291485s
	[INFO] 10.244.0.20:35290 - 30133 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000197187s
	[INFO] 10.244.0.20:53840 - 41496 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000129801s
	[INFO] 10.244.0.20:41539 - 24321 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000173237s
	[INFO] 10.244.0.20:59633 - 9891 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000249337s
	[INFO] 10.244.0.20:42981 - 21698 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00018945s
	[INFO] 10.244.0.20:33488 - 2307 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000345572s
	[INFO] 10.244.0.20:54299 - 57282 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001920467s
	[INFO] 10.244.0.20:35789 - 26998 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002393608s
	[INFO] 10.244.0.20:57162 - 53250 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.003201679s
	[INFO] 10.244.0.20:40510 - 521 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003184408s
	[INFO] 10.244.0.23:60718 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000198812s
	[INFO] 10.244.0.23:58164 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000164934s
	
	
	==> describe nodes <==
	Name:               addons-238225
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-238225
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ebd49efe8b7d44f89fc96e21beffcd64dfee8277
	                    minikube.k8s.io/name=addons-238225
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T01_58_30_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-238225
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-238225"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 01:58:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-238225
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 02:00:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 02:00:32 +0000   Wed, 19 Nov 2025 01:58:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 02:00:32 +0000   Wed, 19 Nov 2025 01:58:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 02:00:32 +0000   Wed, 19 Nov 2025 01:58:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 02:00:32 +0000   Wed, 19 Nov 2025 01:59:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-238225
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                b9b6b0e2-598d-450e-a134-2ff248f1e4ea
	  Boot ID:                    b92b1939-fcd0-45dc-ac89-2d161566a71c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         33s
	  default                     cloud-spanner-emulator-6f9fcf858b-hklsv                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  gadget                      gadget-9r7cc                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  gcp-auth                    gcp-auth-78565c9fb4-nq6z4                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m18s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-gsl4s                      100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         2m21s
	  kube-system                 coredns-66bc5c9577-xmb7d                                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m27s
	  kube-system                 csi-hostpath-attacher-0                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 csi-hostpath-resizer-0                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 csi-hostpathplugin-rfpfq                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 etcd-addons-238225                                            100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m32s
	  kube-system                 kindnet-8wgcz                                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m28s
	  kube-system                 kube-apiserver-addons-238225                                  250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m32s
	  kube-system                 kube-controller-manager-addons-238225                         200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m33s
	  kube-system                 kube-ingress-dns-minikube                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 kube-proxy-6dppw                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 kube-scheduler-addons-238225                                  100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m34s
	  kube-system                 metrics-server-85b7d694d7-wjr8r                               100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         2m22s
	  kube-system                 nvidia-device-plugin-daemonset-fb27k                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 registry-6b586f9694-2n7m4                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 registry-creds-764b6fb674-6dd8r                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 registry-proxy-7m7l6                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 snapshot-controller-7d9fbc56b8-5fsqs                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 snapshot-controller-7d9fbc56b8-x5sfx                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  local-path-storage          helper-pod-create-pvc-62679135-f675-42e1-8d98-c37f6ea08626    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  local-path-storage          local-path-provisioner-648f6765c9-t2frb                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-97cnn                                0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     2m22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 2m25s  kube-proxy       
	  Normal   Starting                 2m32s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m32s  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m32s  kubelet          Node addons-238225 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m32s  kubelet          Node addons-238225 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m32s  kubelet          Node addons-238225 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m29s  node-controller  Node addons-238225 event: Registered Node addons-238225 in Controller
	  Normal   NodeReady                106s   kubelet          Node addons-238225 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov19 01:56] kauditd_printk_skb: 8 callbacks suppressed
	[Nov19 01:58] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [85abfad90a4c2830eebe69eeb776b9e0f018907069c8517cc51c16103c6b98c1] <==
	{"level":"warn","ts":"2025-11-19T01:58:24.379140Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T01:58:24.395187Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T01:58:24.414512Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T01:58:24.430538Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T01:58:24.449240Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T01:58:24.471096Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T01:58:24.489374Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T01:58:24.506557Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T01:58:24.515826Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T01:58:24.541855Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T01:58:24.558207Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T01:58:24.568677Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T01:58:24.590175Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T01:58:24.602103Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T01:58:24.627010Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T01:58:24.674568Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T01:58:24.743992Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T01:58:24.754236Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T01:58:24.939765Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T01:58:40.806291Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T01:58:40.824309Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T01:59:02.992547Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T01:59:03.011383Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T01:59:03.033152Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T01:59:03.048528Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36060","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [aa2392de9c0927f647992fb8c60e46f193c1b89e2d5d0b8d69a02b9222a4fd7b] <==
	2025/11/19 02:00:03 GCP Auth Webhook started!
	2025/11/19 02:00:28 Ready to marshal response ...
	2025/11/19 02:00:28 Ready to write response ...
	2025/11/19 02:00:28 Ready to marshal response ...
	2025/11/19 02:00:28 Ready to write response ...
	2025/11/19 02:00:28 Ready to marshal response ...
	2025/11/19 02:00:28 Ready to write response ...
	2025/11/19 02:00:48 Ready to marshal response ...
	2025/11/19 02:00:48 Ready to write response ...
	2025/11/19 02:00:51 Ready to marshal response ...
	2025/11/19 02:00:51 Ready to write response ...
	2025/11/19 02:00:51 Ready to marshal response ...
	2025/11/19 02:00:51 Ready to write response ...
	
	
	==> kernel <==
	 02:01:01 up  9:43,  0 user,  load average: 2.67, 1.65, 1.05
	Linux addons-238225 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a841f7bd1c9314f581270e99b5249d563aa54a685fc9377709257d65d7241884] <==
	I1119 01:59:06.191177       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 01:59:06.191275       1 metrics.go:72] Registering metrics
	I1119 01:59:06.191347       1 controller.go:711] "Syncing nftables rules"
	I1119 01:59:14.679196       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 01:59:14.679248       1 main.go:301] handling current node
	I1119 01:59:24.677775       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 01:59:24.677821       1 main.go:301] handling current node
	I1119 01:59:34.677320       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 01:59:34.677485       1 main.go:301] handling current node
	I1119 01:59:44.678232       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 01:59:44.678274       1 main.go:301] handling current node
	I1119 01:59:54.677592       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 01:59:54.677628       1 main.go:301] handling current node
	I1119 02:00:04.677483       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 02:00:04.677537       1 main.go:301] handling current node
	I1119 02:00:14.677777       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 02:00:14.677815       1 main.go:301] handling current node
	I1119 02:00:24.678240       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 02:00:24.678279       1 main.go:301] handling current node
	I1119 02:00:34.678047       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 02:00:34.678083       1 main.go:301] handling current node
	I1119 02:00:44.677582       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 02:00:44.677616       1 main.go:301] handling current node
	I1119 02:00:54.678372       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 02:00:54.678404       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a757a1a6114f803952eab86dab9d7a3706e530f2d53eccfb6a046fcfea9ad3b4] <==
	I1119 01:58:40.086004       1 controller.go:667] quota admission added evaluator for: jobs.batch
	I1119 01:58:40.394792       1 alloc.go:328] "allocated clusterIPs" service="kube-system/csi-hostpath-attacher" clusterIPs={"IPv4":"10.106.211.126"}
	I1119 01:58:40.406650       1 controller.go:667] quota admission added evaluator for: statefulsets.apps
	I1119 01:58:40.480626       1 alloc.go:328] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs={"IPv4":"10.106.209.74"}
	W1119 01:58:40.796788       1 logging.go:55] [core] [Channel #259 SubChannel #260]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1119 01:58:40.816903       1 logging.go:55] [core] [Channel #263 SubChannel #264]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	I1119 01:58:43.601093       1 alloc.go:328] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.107.24.64"}
	W1119 01:59:02.992019       1 logging.go:55] [core] [Channel #270 SubChannel #271]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1119 01:59:03.009396       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1119 01:59:03.032916       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1119 01:59:03.048771       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1119 01:59:15.214761       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.24.64:443: connect: connection refused
	E1119 01:59:15.215016       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.107.24.64:443: connect: connection refused" logger="UnhandledError"
	W1119 01:59:15.217456       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.24.64:443: connect: connection refused
	E1119 01:59:15.217500       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.107.24.64:443: connect: connection refused" logger="UnhandledError"
	W1119 01:59:15.352912       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.24.64:443: connect: connection refused
	E1119 01:59:15.352958       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.107.24.64:443: connect: connection refused" logger="UnhandledError"
	W1119 01:59:33.819937       1 handler_proxy.go:99] no RequestInfo found in the context
	E1119 01:59:33.820005       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1119 01:59:33.821258       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.20.157:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.111.20.157:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.111.20.157:443: connect: connection refused" logger="UnhandledError"
	I1119 01:59:33.874873       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1119 01:59:33.888355       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	
	
	==> kube-controller-manager [7a77a55a81c017bef912f34dd320fb488cb213cabc9bee0e9a3126964c29252b] <==
	I1119 01:58:33.020431       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1119 01:58:33.020465       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1119 01:58:33.020914       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1119 01:58:33.021020       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1119 01:58:33.021244       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1119 01:58:33.021433       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1119 01:58:33.021618       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1119 01:58:33.021911       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1119 01:58:33.021949       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1119 01:58:33.022264       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1119 01:58:33.022822       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1119 01:58:33.023244       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1119 01:58:33.023282       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	E1119 01:58:39.076273       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1119 01:58:39.115632       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1119 01:59:02.984925       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1119 01:59:02.985193       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1119 01:59:02.985259       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1119 01:59:03.017020       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1119 01:59:03.021952       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1119 01:59:03.085486       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 01:59:03.123158       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 01:59:17.985530       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1119 01:59:33.090133       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1119 01:59:33.134845       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [0c54dc25c8ad51cf6765dc7bc85a062001f2e7ac00a156aaa64443d92f972181] <==
	I1119 01:58:35.092774       1 server_linux.go:53] "Using iptables proxy"
	I1119 01:58:35.223803       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 01:58:35.324676       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 01:58:35.324709       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1119 01:58:35.324774       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 01:58:35.470235       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 01:58:35.470295       1 server_linux.go:132] "Using iptables Proxier"
	I1119 01:58:35.481094       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 01:58:35.481430       1 server.go:527] "Version info" version="v1.34.1"
	I1119 01:58:35.481446       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 01:58:35.495052       1 config.go:200] "Starting service config controller"
	I1119 01:58:35.495078       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 01:58:35.495097       1 config.go:106] "Starting endpoint slice config controller"
	I1119 01:58:35.495101       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 01:58:35.495113       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 01:58:35.495116       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 01:58:35.495831       1 config.go:309] "Starting node config controller"
	I1119 01:58:35.495844       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 01:58:35.495851       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 01:58:35.595418       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1119 01:58:35.595454       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1119 01:58:35.595496       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [76ee598a60e1ecbb0846681cb536270450910784fdbfeec1b724bbc506bc7fc1] <==
	I1119 01:58:26.775771       1 serving.go:386] Generated self-signed cert in-memory
	I1119 01:58:28.528281       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1119 01:58:28.528313       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 01:58:28.534196       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1119 01:58:28.534311       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1119 01:58:28.534378       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 01:58:28.534414       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 01:58:28.534453       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1119 01:58:28.534496       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1119 01:58:28.534638       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1119 01:58:28.534708       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1119 01:58:28.635021       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1119 01:58:28.635093       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1119 01:58:28.635433       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 19 02:00:20 addons-238225 kubelet[1266]: E1119 02:00:20.267148    1266 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Nov 19 02:00:20 addons-238225 kubelet[1266]: E1119 02:00:20.267240    1266 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ca3fc492-0471-4cd7-8f7f-c6c9a672b1c7-gcr-creds podName:ca3fc492-0471-4cd7-8f7f-c6c9a672b1c7 nodeName:}" failed. No retries permitted until 2025-11-19 02:01:24.267221754 +0000 UTC m=+175.101976437 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/ca3fc492-0471-4cd7-8f7f-c6c9a672b1c7-gcr-creds") pod "registry-creds-764b6fb674-6dd8r" (UID: "ca3fc492-0471-4cd7-8f7f-c6c9a672b1c7") : secret "registry-creds-gcr" not found
	Nov 19 02:00:26 addons-238225 kubelet[1266]: I1119 02:00:26.169682    1266 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-rfpfq" podStartSLOduration=18.180198621 podStartE2EDuration="1m11.169573548s" podCreationTimestamp="2025-11-19 01:59:15 +0000 UTC" firstStartedPulling="2025-11-19 01:59:15.983016618 +0000 UTC m=+46.817771300" lastFinishedPulling="2025-11-19 02:00:08.972391544 +0000 UTC m=+99.807146227" observedRunningTime="2025-11-19 02:00:09.098473845 +0000 UTC m=+99.933228544" watchObservedRunningTime="2025-11-19 02:00:26.169573548 +0000 UTC m=+117.004328239"
	Nov 19 02:00:28 addons-238225 kubelet[1266]: I1119 02:00:28.905586    1266 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-6c8bf45fb-gsl4s" podStartSLOduration=102.772155486 podStartE2EDuration="1m48.905564296s" podCreationTimestamp="2025-11-19 01:58:40 +0000 UTC" firstStartedPulling="2025-11-19 02:00:19.567598635 +0000 UTC m=+110.402353317" lastFinishedPulling="2025-11-19 02:00:25.701007444 +0000 UTC m=+116.535762127" observedRunningTime="2025-11-19 02:00:26.169963803 +0000 UTC m=+117.004718486" watchObservedRunningTime="2025-11-19 02:00:28.905564296 +0000 UTC m=+119.740318978"
	Nov 19 02:00:29 addons-238225 kubelet[1266]: I1119 02:00:29.055147    1266 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpmfc\" (UniqueName: \"kubernetes.io/projected/99db9ee7-11e8-4a19-b431-99c0b121ef76-kube-api-access-jpmfc\") pod \"busybox\" (UID: \"99db9ee7-11e8-4a19-b431-99c0b121ef76\") " pod="default/busybox"
	Nov 19 02:00:29 addons-238225 kubelet[1266]: I1119 02:00:29.055446    1266 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/99db9ee7-11e8-4a19-b431-99c0b121ef76-gcp-creds\") pod \"busybox\" (UID: \"99db9ee7-11e8-4a19-b431-99c0b121ef76\") " pod="default/busybox"
	Nov 19 02:00:29 addons-238225 kubelet[1266]: I1119 02:00:29.347958    1266 scope.go:117] "RemoveContainer" containerID="05ba16188c1cd42a17f21fc04d83a25b7f47fe039e26223ffcaf9b2bc744f0cc"
	Nov 19 02:00:29 addons-238225 kubelet[1266]: I1119 02:00:29.362963    1266 scope.go:117] "RemoveContainer" containerID="b2c6c77a55e9f16ec525a5eb0ae9a94b342992229d85e22d6bbd2cf8fdaa4613"
	Nov 19 02:00:38 addons-238225 kubelet[1266]: I1119 02:00:38.171557    1266 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=8.194201486 podStartE2EDuration="10.171539663s" podCreationTimestamp="2025-11-19 02:00:28 +0000 UTC" firstStartedPulling="2025-11-19 02:00:29.243621558 +0000 UTC m=+120.078376241" lastFinishedPulling="2025-11-19 02:00:31.220959727 +0000 UTC m=+122.055714418" observedRunningTime="2025-11-19 02:00:32.180789982 +0000 UTC m=+123.015544673" watchObservedRunningTime="2025-11-19 02:00:38.171539663 +0000 UTC m=+129.006294354"
	Nov 19 02:00:49 addons-238225 kubelet[1266]: I1119 02:00:49.120288    1266 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/2baf8cd1-8924-4fb3-82d9-700c13fe0f27-gcp-creds\") pod \"registry-test\" (UID: \"2baf8cd1-8924-4fb3-82d9-700c13fe0f27\") " pod="default/registry-test"
	Nov 19 02:00:49 addons-238225 kubelet[1266]: I1119 02:00:49.120385    1266 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxrjq\" (UniqueName: \"kubernetes.io/projected/2baf8cd1-8924-4fb3-82d9-700c13fe0f27-kube-api-access-vxrjq\") pod \"registry-test\" (UID: \"2baf8cd1-8924-4fb3-82d9-700c13fe0f27\") " pod="default/registry-test"
	Nov 19 02:00:51 addons-238225 kubelet[1266]: I1119 02:00:51.945555    1266 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/2822ff85-be16-4469-903a-671f59bca12e-data\") pod \"helper-pod-create-pvc-62679135-f675-42e1-8d98-c37f6ea08626\" (UID: \"2822ff85-be16-4469-903a-671f59bca12e\") " pod="local-path-storage/helper-pod-create-pvc-62679135-f675-42e1-8d98-c37f6ea08626"
	Nov 19 02:00:51 addons-238225 kubelet[1266]: I1119 02:00:51.945681    1266 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/2822ff85-be16-4469-903a-671f59bca12e-gcp-creds\") pod \"helper-pod-create-pvc-62679135-f675-42e1-8d98-c37f6ea08626\" (UID: \"2822ff85-be16-4469-903a-671f59bca12e\") " pod="local-path-storage/helper-pod-create-pvc-62679135-f675-42e1-8d98-c37f6ea08626"
	Nov 19 02:00:51 addons-238225 kubelet[1266]: I1119 02:00:51.945762    1266 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/2822ff85-be16-4469-903a-671f59bca12e-script\") pod \"helper-pod-create-pvc-62679135-f675-42e1-8d98-c37f6ea08626\" (UID: \"2822ff85-be16-4469-903a-671f59bca12e\") " pod="local-path-storage/helper-pod-create-pvc-62679135-f675-42e1-8d98-c37f6ea08626"
	Nov 19 02:00:51 addons-238225 kubelet[1266]: I1119 02:00:51.945822    1266 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dz96b\" (UniqueName: \"kubernetes.io/projected/2822ff85-be16-4469-903a-671f59bca12e-kube-api-access-dz96b\") pod \"helper-pod-create-pvc-62679135-f675-42e1-8d98-c37f6ea08626\" (UID: \"2822ff85-be16-4469-903a-671f59bca12e\") " pod="local-path-storage/helper-pod-create-pvc-62679135-f675-42e1-8d98-c37f6ea08626"
	Nov 19 02:00:53 addons-238225 kubelet[1266]: I1119 02:00:53.350400    1266 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-fb27k" secret="" err="secret \"gcp-auth\" not found"
	Nov 19 02:00:53 addons-238225 kubelet[1266]: I1119 02:00:53.457706    1266 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxrjq\" (UniqueName: \"kubernetes.io/projected/2baf8cd1-8924-4fb3-82d9-700c13fe0f27-kube-api-access-vxrjq\") pod \"2baf8cd1-8924-4fb3-82d9-700c13fe0f27\" (UID: \"2baf8cd1-8924-4fb3-82d9-700c13fe0f27\") "
	Nov 19 02:00:53 addons-238225 kubelet[1266]: I1119 02:00:53.457766    1266 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/2baf8cd1-8924-4fb3-82d9-700c13fe0f27-gcp-creds\") pod \"2baf8cd1-8924-4fb3-82d9-700c13fe0f27\" (UID: \"2baf8cd1-8924-4fb3-82d9-700c13fe0f27\") "
	Nov 19 02:00:53 addons-238225 kubelet[1266]: I1119 02:00:53.458521    1266 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2baf8cd1-8924-4fb3-82d9-700c13fe0f27-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "2baf8cd1-8924-4fb3-82d9-700c13fe0f27" (UID: "2baf8cd1-8924-4fb3-82d9-700c13fe0f27"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Nov 19 02:00:53 addons-238225 kubelet[1266]: I1119 02:00:53.460477    1266 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2baf8cd1-8924-4fb3-82d9-700c13fe0f27-kube-api-access-vxrjq" (OuterVolumeSpecName: "kube-api-access-vxrjq") pod "2baf8cd1-8924-4fb3-82d9-700c13fe0f27" (UID: "2baf8cd1-8924-4fb3-82d9-700c13fe0f27"). InnerVolumeSpecName "kube-api-access-vxrjq". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Nov 19 02:00:53 addons-238225 kubelet[1266]: I1119 02:00:53.558425    1266 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vxrjq\" (UniqueName: \"kubernetes.io/projected/2baf8cd1-8924-4fb3-82d9-700c13fe0f27-kube-api-access-vxrjq\") on node \"addons-238225\" DevicePath \"\""
	Nov 19 02:00:53 addons-238225 kubelet[1266]: I1119 02:00:53.558469    1266 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/2baf8cd1-8924-4fb3-82d9-700c13fe0f27-gcp-creds\") on node \"addons-238225\" DevicePath \"\""
	Nov 19 02:00:54 addons-238225 kubelet[1266]: I1119 02:00:54.253677    1266 scope.go:117] "RemoveContainer" containerID="6e3119ea8baa23e5cf3fc989b3eaab5f9efd385886b22f5ec1333802e59dc373"
	Nov 19 02:00:55 addons-238225 kubelet[1266]: I1119 02:00:55.354624    1266 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2baf8cd1-8924-4fb3-82d9-700c13fe0f27" path="/var/lib/kubelet/pods/2baf8cd1-8924-4fb3-82d9-700c13fe0f27/volumes"
	Nov 19 02:00:55 addons-238225 kubelet[1266]: I1119 02:00:55.355980    1266 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-7m7l6" secret="" err="secret \"gcp-auth\" not found"
	
	
	==> storage-provisioner [d0f307f4b6c3423d1af0ad1f8066d8df474dcdfb5ec77842739411e57b5bbc77] <==
	W1119 02:00:36.565266       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:00:38.568879       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:00:38.581779       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:00:40.585165       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:00:40.591768       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:00:42.594939       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:00:42.603520       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:00:44.607052       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:00:44.611733       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:00:46.615134       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:00:46.619745       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:00:48.622726       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:00:48.628321       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:00:50.632010       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:00:50.636352       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:00:52.640020       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:00:52.645056       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:00:54.648125       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:00:54.655680       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:00:56.660374       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:00:56.667657       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:00:58.670270       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:00:58.674790       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:01:00.678634       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:01:00.683655       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-238225 -n addons-238225
helpers_test.go:269: (dbg) Run:  kubectl --context addons-238225 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: test-local-path ingress-nginx-admission-create-7dtkx ingress-nginx-admission-patch-vwbkh registry-creds-764b6fb674-6dd8r helper-pod-create-pvc-62679135-f675-42e1-8d98-c37f6ea08626
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-238225 describe pod test-local-path ingress-nginx-admission-create-7dtkx ingress-nginx-admission-patch-vwbkh registry-creds-764b6fb674-6dd8r helper-pod-create-pvc-62679135-f675-42e1-8d98-c37f6ea08626
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-238225 describe pod test-local-path ingress-nginx-admission-create-7dtkx ingress-nginx-admission-patch-vwbkh registry-creds-764b6fb674-6dd8r helper-pod-create-pvc-62679135-f675-42e1-8d98-c37f6ea08626: exit status 1 (102.186971ms)

                                                
                                                
-- stdout --
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cfsr7 (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-cfsr7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:            <none>

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-7dtkx" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-vwbkh" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-6dd8r" not found
	Error from server (NotFound): pods "helper-pod-create-pvc-62679135-f675-42e1-8d98-c37f6ea08626" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-238225 describe pod test-local-path ingress-nginx-admission-create-7dtkx ingress-nginx-admission-patch-vwbkh registry-creds-764b6fb674-6dd8r helper-pod-create-pvc-62679135-f675-42e1-8d98-c37f6ea08626: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-238225 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-238225 addons disable headlamp --alsologtostderr -v=1: exit status 11 (254.476353ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1119 02:01:02.422535 1473260 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:01:02.424008 1473260 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:01:02.424031 1473260 out.go:374] Setting ErrFile to fd 2...
	I1119 02:01:02.424037 1473260 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:01:02.424333 1473260 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-1463525/.minikube/bin
	I1119 02:01:02.424684 1473260 mustload.go:66] Loading cluster: addons-238225
	I1119 02:01:02.425052 1473260 config.go:182] Loaded profile config "addons-238225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:01:02.425069 1473260 addons.go:607] checking whether the cluster is paused
	I1119 02:01:02.425175 1473260 config.go:182] Loaded profile config "addons-238225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:01:02.425184 1473260 host.go:66] Checking if "addons-238225" exists ...
	I1119 02:01:02.425662 1473260 cli_runner.go:164] Run: docker container inspect addons-238225 --format={{.State.Status}}
	I1119 02:01:02.442802 1473260 ssh_runner.go:195] Run: systemctl --version
	I1119 02:01:02.442860 1473260 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-238225
	I1119 02:01:02.460849 1473260 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34614 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/addons-238225/id_rsa Username:docker}
	I1119 02:01:02.560062 1473260 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 02:01:02.560161 1473260 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 02:01:02.590217 1473260 cri.go:89] found id: "9c232d33326a74acf603b22763aa1dc42d70a479e12432e50bfa9c9405eed2d8"
	I1119 02:01:02.590249 1473260 cri.go:89] found id: "772cfe62f02aaecb841e889b3cbb65d2a9b5651157073f7db00ccf3ddff4c0f1"
	I1119 02:01:02.590254 1473260 cri.go:89] found id: "21ceae69f9b817368777579d582cb590af22569048c0dbfdcc0ff812b0e66e82"
	I1119 02:01:02.590258 1473260 cri.go:89] found id: "d64b782c68c2447187c1e3efd65e0913b2455f09ccc969d04feb74abe38e660a"
	I1119 02:01:02.590261 1473260 cri.go:89] found id: "5e8f0f7f444317dfe5eacc4508981c90287704743391b71bc5ccb185d00f1f05"
	I1119 02:01:02.590265 1473260 cri.go:89] found id: "b38eaf566b86ba36188f3bd9d9c4bf78d2c17cca364dfee5652ee99d4b60a7b9"
	I1119 02:01:02.590268 1473260 cri.go:89] found id: "913d1dc20a3a2502a8c5187d02817ea7496846fae75cdd364154dcf3ba504b95"
	I1119 02:01:02.590271 1473260 cri.go:89] found id: "d4baa1f0a47d31c47f92d4737ca4bf1a74bf81781024ad8fb0bc1aab729ee9e4"
	I1119 02:01:02.590274 1473260 cri.go:89] found id: "91ecb63aa939ed937635e6c758cf1f28306bf72385e48c4d5d6e5eac9fe999f5"
	I1119 02:01:02.590281 1473260 cri.go:89] found id: "c53519ba9e004b3ff7be4f7f3cef7fab949fdcf796eaede9f39a73fd6b199e6e"
	I1119 02:01:02.590284 1473260 cri.go:89] found id: "dcca1b842fe4422e2747d4422c0f9f7b575eecab7d393d4f0995a33df7c79162"
	I1119 02:01:02.590288 1473260 cri.go:89] found id: "e4c59f62ececb825cf3a40b0802bc8b6ecb4d59770f79a84a7403b9319302101"
	I1119 02:01:02.590291 1473260 cri.go:89] found id: "be9d5b6bedfbc91bb699344892f0474b20a841254ce6fd3144408edd11bc007d"
	I1119 02:01:02.590294 1473260 cri.go:89] found id: "d79a6a486de50d2c0685228164143b81b6a22900f48f3a05491c47877066261b"
	I1119 02:01:02.590297 1473260 cri.go:89] found id: "99e3704db1eb401031a862edd15e56b4aec5c806bb339f38d76ba88c7e8fa047"
	I1119 02:01:02.590324 1473260 cri.go:89] found id: "b94070f6dc6d4ea17b3a67020e38e4caa93a1b8b83d5bb691770abfbccddba96"
	I1119 02:01:02.590338 1473260 cri.go:89] found id: "d0f307f4b6c3423d1af0ad1f8066d8df474dcdfb5ec77842739411e57b5bbc77"
	I1119 02:01:02.590343 1473260 cri.go:89] found id: "0c54dc25c8ad51cf6765dc7bc85a062001f2e7ac00a156aaa64443d92f972181"
	I1119 02:01:02.590347 1473260 cri.go:89] found id: "a841f7bd1c9314f581270e99b5249d563aa54a685fc9377709257d65d7241884"
	I1119 02:01:02.590350 1473260 cri.go:89] found id: "76ee598a60e1ecbb0846681cb536270450910784fdbfeec1b724bbc506bc7fc1"
	I1119 02:01:02.590355 1473260 cri.go:89] found id: "a757a1a6114f803952eab86dab9d7a3706e530f2d53eccfb6a046fcfea9ad3b4"
	I1119 02:01:02.590358 1473260 cri.go:89] found id: "7a77a55a81c017bef912f34dd320fb488cb213cabc9bee0e9a3126964c29252b"
	I1119 02:01:02.590361 1473260 cri.go:89] found id: "85abfad90a4c2830eebe69eeb776b9e0f018907069c8517cc51c16103c6b98c1"
	I1119 02:01:02.590365 1473260 cri.go:89] found id: ""
	I1119 02:01:02.590425 1473260 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 02:01:02.606482 1473260 out.go:203] 
	W1119 02:01:02.609495 1473260 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:01:02Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:01:02Z" level=error msg="open /run/runc: no such file or directory"
	
	W1119 02:01:02.609611 1473260 out.go:285] * 
	* 
	W1119 02:01:02.618735 1473260 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 02:01:02.621842 1473260 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-arm64 -p addons-238225 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (3.56s)
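This failure, like the other addon-disable failures in this report, exits with MK_ADDON_DISABLE_PAUSED because the paused-state check shells out to `sudo runc list -f json`, which cannot succeed on this CRI-O node (`/run/runc` does not exist). A minimal reproduction sketch of the node-side check, assuming the addons-238225 profile from this run is still up; both inner commands are copied from the trace above:

	# List kube-system containers the way the disable path does, then run the
	# runc query that actually fails on CRI-O (expect "open /run/runc: no such
	# file or directory", matching the stderr captured above).
	out/minikube-linux-arm64 -p addons-238225 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	out/minikube-linux-arm64 -p addons-238225 ssh -- sudo runc list -f json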

                                                
                                    
TestAddons/parallel/CloudSpanner (5.31s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-6f9fcf858b-hklsv" [a8a6b30b-3856-4253-896c-3cd11780190e] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.00306685s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-238225 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-238225 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (302.172911ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1119 02:00:58.824659 1472735 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:00:58.826090 1472735 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:00:58.826107 1472735 out.go:374] Setting ErrFile to fd 2...
	I1119 02:00:58.826112 1472735 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:00:58.826379 1472735 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-1463525/.minikube/bin
	I1119 02:00:58.826693 1472735 mustload.go:66] Loading cluster: addons-238225
	I1119 02:00:58.827107 1472735 config.go:182] Loaded profile config "addons-238225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:00:58.827125 1472735 addons.go:607] checking whether the cluster is paused
	I1119 02:00:58.827257 1472735 config.go:182] Loaded profile config "addons-238225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:00:58.827276 1472735 host.go:66] Checking if "addons-238225" exists ...
	I1119 02:00:58.827716 1472735 cli_runner.go:164] Run: docker container inspect addons-238225 --format={{.State.Status}}
	I1119 02:00:58.852471 1472735 ssh_runner.go:195] Run: systemctl --version
	I1119 02:00:58.852605 1472735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-238225
	I1119 02:00:58.891737 1472735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34614 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/addons-238225/id_rsa Username:docker}
	I1119 02:00:58.995988 1472735 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 02:00:58.996069 1472735 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 02:00:59.032777 1472735 cri.go:89] found id: "9c232d33326a74acf603b22763aa1dc42d70a479e12432e50bfa9c9405eed2d8"
	I1119 02:00:59.032801 1472735 cri.go:89] found id: "772cfe62f02aaecb841e889b3cbb65d2a9b5651157073f7db00ccf3ddff4c0f1"
	I1119 02:00:59.032807 1472735 cri.go:89] found id: "21ceae69f9b817368777579d582cb590af22569048c0dbfdcc0ff812b0e66e82"
	I1119 02:00:59.032810 1472735 cri.go:89] found id: "d64b782c68c2447187c1e3efd65e0913b2455f09ccc969d04feb74abe38e660a"
	I1119 02:00:59.032814 1472735 cri.go:89] found id: "5e8f0f7f444317dfe5eacc4508981c90287704743391b71bc5ccb185d00f1f05"
	I1119 02:00:59.032817 1472735 cri.go:89] found id: "b38eaf566b86ba36188f3bd9d9c4bf78d2c17cca364dfee5652ee99d4b60a7b9"
	I1119 02:00:59.032820 1472735 cri.go:89] found id: "913d1dc20a3a2502a8c5187d02817ea7496846fae75cdd364154dcf3ba504b95"
	I1119 02:00:59.032823 1472735 cri.go:89] found id: "d4baa1f0a47d31c47f92d4737ca4bf1a74bf81781024ad8fb0bc1aab729ee9e4"
	I1119 02:00:59.032826 1472735 cri.go:89] found id: "91ecb63aa939ed937635e6c758cf1f28306bf72385e48c4d5d6e5eac9fe999f5"
	I1119 02:00:59.032832 1472735 cri.go:89] found id: "c53519ba9e004b3ff7be4f7f3cef7fab949fdcf796eaede9f39a73fd6b199e6e"
	I1119 02:00:59.032835 1472735 cri.go:89] found id: "dcca1b842fe4422e2747d4422c0f9f7b575eecab7d393d4f0995a33df7c79162"
	I1119 02:00:59.032838 1472735 cri.go:89] found id: "e4c59f62ececb825cf3a40b0802bc8b6ecb4d59770f79a84a7403b9319302101"
	I1119 02:00:59.032842 1472735 cri.go:89] found id: "be9d5b6bedfbc91bb699344892f0474b20a841254ce6fd3144408edd11bc007d"
	I1119 02:00:59.032845 1472735 cri.go:89] found id: "d79a6a486de50d2c0685228164143b81b6a22900f48f3a05491c47877066261b"
	I1119 02:00:59.032848 1472735 cri.go:89] found id: "99e3704db1eb401031a862edd15e56b4aec5c806bb339f38d76ba88c7e8fa047"
	I1119 02:00:59.032853 1472735 cri.go:89] found id: "b94070f6dc6d4ea17b3a67020e38e4caa93a1b8b83d5bb691770abfbccddba96"
	I1119 02:00:59.032860 1472735 cri.go:89] found id: "d0f307f4b6c3423d1af0ad1f8066d8df474dcdfb5ec77842739411e57b5bbc77"
	I1119 02:00:59.032868 1472735 cri.go:89] found id: "0c54dc25c8ad51cf6765dc7bc85a062001f2e7ac00a156aaa64443d92f972181"
	I1119 02:00:59.032871 1472735 cri.go:89] found id: "a841f7bd1c9314f581270e99b5249d563aa54a685fc9377709257d65d7241884"
	I1119 02:00:59.032875 1472735 cri.go:89] found id: "76ee598a60e1ecbb0846681cb536270450910784fdbfeec1b724bbc506bc7fc1"
	I1119 02:00:59.032883 1472735 cri.go:89] found id: "a757a1a6114f803952eab86dab9d7a3706e530f2d53eccfb6a046fcfea9ad3b4"
	I1119 02:00:59.032887 1472735 cri.go:89] found id: "7a77a55a81c017bef912f34dd320fb488cb213cabc9bee0e9a3126964c29252b"
	I1119 02:00:59.032890 1472735 cri.go:89] found id: "85abfad90a4c2830eebe69eeb776b9e0f018907069c8517cc51c16103c6b98c1"
	I1119 02:00:59.032893 1472735 cri.go:89] found id: ""
	I1119 02:00:59.032943 1472735 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 02:00:59.051126 1472735 out.go:203] 
	W1119 02:00:59.054648 1472735 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:00:59Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:00:59Z" level=error msg="open /run/runc: no such file or directory"
	
	W1119 02:00:59.054675 1472735 out.go:285] * 
	* 
	W1119 02:00:59.063518 1472735 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 02:00:59.066935 1472735 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-arm64 -p addons-238225 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.31s)
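The CloudSpanner disable trips the same MK_ADDON_DISABLE_PAUSED exit even though the emulator pod was reported healthy moments earlier, so the cluster is not actually paused; the check itself is what breaks. A quick confirmation sketch from outside the node (profile and context names taken from this report; standard minikube/kubectl commands):

	# Both should show a running control plane and kube-system workloads,
	# ruling out a genuinely paused cluster.
	out/minikube-linux-arm64 -p addons-238225 status
	kubectl --context addons-238225 get pods -n kube-system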

                                                
                                    
TestAddons/parallel/LocalPath (19.41s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-238225 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-238225 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-238225 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-238225 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-238225 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-238225 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-238225 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-238225 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-238225 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-238225 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-238225 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-238225 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-238225 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-238225 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-238225 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-238225 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-238225 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-238225 get pvc test-pvc -o jsonpath={.status.phase} -n default
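The sixteen identical `get pvc` calls above are the helper polling for test-pvc to leave Pending. Outside the test harness the same wait can be written as a single blocking command; a minimal sketch, assuming a kubectl new enough to support jsonpath conditions in `kubectl wait` (v1.23+):

	# Block until the claim reports phase Bound, or give up after five minutes.
	kubectl --context addons-238225 wait --for=jsonpath='{.status.phase}'=Bound pvc/test-pvc --timeout=5m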
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [8841e4d7-b34c-43a8-b3aa-45adc960b30a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [8841e4d7-b34c-43a8-b3aa-45adc960b30a] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [8841e4d7-b34c-43a8-b3aa-45adc960b30a] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003256735s
addons_test.go:967: (dbg) Run:  kubectl --context addons-238225 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-238225 ssh "cat /opt/local-path-provisioner/pvc-62679135-f675-42e1-8d98-c37f6ea08626_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-238225 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-238225 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-238225 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-238225 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (303.699272ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1119 02:01:10.646417 1473591 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:01:10.648006 1473591 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:01:10.648025 1473591 out.go:374] Setting ErrFile to fd 2...
	I1119 02:01:10.648031 1473591 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:01:10.648382 1473591 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-1463525/.minikube/bin
	I1119 02:01:10.648756 1473591 mustload.go:66] Loading cluster: addons-238225
	I1119 02:01:10.649234 1473591 config.go:182] Loaded profile config "addons-238225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:01:10.649256 1473591 addons.go:607] checking whether the cluster is paused
	I1119 02:01:10.649368 1473591 config.go:182] Loaded profile config "addons-238225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:01:10.649385 1473591 host.go:66] Checking if "addons-238225" exists ...
	I1119 02:01:10.650006 1473591 cli_runner.go:164] Run: docker container inspect addons-238225 --format={{.State.Status}}
	I1119 02:01:10.667570 1473591 ssh_runner.go:195] Run: systemctl --version
	I1119 02:01:10.667630 1473591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-238225
	I1119 02:01:10.684823 1473591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34614 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/addons-238225/id_rsa Username:docker}
	I1119 02:01:10.784249 1473591 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 02:01:10.784361 1473591 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 02:01:10.830033 1473591 cri.go:89] found id: "9c232d33326a74acf603b22763aa1dc42d70a479e12432e50bfa9c9405eed2d8"
	I1119 02:01:10.830057 1473591 cri.go:89] found id: "772cfe62f02aaecb841e889b3cbb65d2a9b5651157073f7db00ccf3ddff4c0f1"
	I1119 02:01:10.830063 1473591 cri.go:89] found id: "21ceae69f9b817368777579d582cb590af22569048c0dbfdcc0ff812b0e66e82"
	I1119 02:01:10.830066 1473591 cri.go:89] found id: "d64b782c68c2447187c1e3efd65e0913b2455f09ccc969d04feb74abe38e660a"
	I1119 02:01:10.830070 1473591 cri.go:89] found id: "5e8f0f7f444317dfe5eacc4508981c90287704743391b71bc5ccb185d00f1f05"
	I1119 02:01:10.830073 1473591 cri.go:89] found id: "b38eaf566b86ba36188f3bd9d9c4bf78d2c17cca364dfee5652ee99d4b60a7b9"
	I1119 02:01:10.830077 1473591 cri.go:89] found id: "913d1dc20a3a2502a8c5187d02817ea7496846fae75cdd364154dcf3ba504b95"
	I1119 02:01:10.830080 1473591 cri.go:89] found id: "d4baa1f0a47d31c47f92d4737ca4bf1a74bf81781024ad8fb0bc1aab729ee9e4"
	I1119 02:01:10.830083 1473591 cri.go:89] found id: "91ecb63aa939ed937635e6c758cf1f28306bf72385e48c4d5d6e5eac9fe999f5"
	I1119 02:01:10.830089 1473591 cri.go:89] found id: "c53519ba9e004b3ff7be4f7f3cef7fab949fdcf796eaede9f39a73fd6b199e6e"
	I1119 02:01:10.830092 1473591 cri.go:89] found id: "dcca1b842fe4422e2747d4422c0f9f7b575eecab7d393d4f0995a33df7c79162"
	I1119 02:01:10.830095 1473591 cri.go:89] found id: "e4c59f62ececb825cf3a40b0802bc8b6ecb4d59770f79a84a7403b9319302101"
	I1119 02:01:10.830099 1473591 cri.go:89] found id: "be9d5b6bedfbc91bb699344892f0474b20a841254ce6fd3144408edd11bc007d"
	I1119 02:01:10.830102 1473591 cri.go:89] found id: "d79a6a486de50d2c0685228164143b81b6a22900f48f3a05491c47877066261b"
	I1119 02:01:10.830105 1473591 cri.go:89] found id: "99e3704db1eb401031a862edd15e56b4aec5c806bb339f38d76ba88c7e8fa047"
	I1119 02:01:10.830110 1473591 cri.go:89] found id: "b94070f6dc6d4ea17b3a67020e38e4caa93a1b8b83d5bb691770abfbccddba96"
	I1119 02:01:10.830117 1473591 cri.go:89] found id: "d0f307f4b6c3423d1af0ad1f8066d8df474dcdfb5ec77842739411e57b5bbc77"
	I1119 02:01:10.830121 1473591 cri.go:89] found id: "0c54dc25c8ad51cf6765dc7bc85a062001f2e7ac00a156aaa64443d92f972181"
	I1119 02:01:10.830124 1473591 cri.go:89] found id: "a841f7bd1c9314f581270e99b5249d563aa54a685fc9377709257d65d7241884"
	I1119 02:01:10.830127 1473591 cri.go:89] found id: "76ee598a60e1ecbb0846681cb536270450910784fdbfeec1b724bbc506bc7fc1"
	I1119 02:01:10.830132 1473591 cri.go:89] found id: "a757a1a6114f803952eab86dab9d7a3706e530f2d53eccfb6a046fcfea9ad3b4"
	I1119 02:01:10.830135 1473591 cri.go:89] found id: "7a77a55a81c017bef912f34dd320fb488cb213cabc9bee0e9a3126964c29252b"
	I1119 02:01:10.830139 1473591 cri.go:89] found id: "85abfad90a4c2830eebe69eeb776b9e0f018907069c8517cc51c16103c6b98c1"
	I1119 02:01:10.830142 1473591 cri.go:89] found id: ""
	I1119 02:01:10.830202 1473591 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 02:01:10.850909 1473591 out.go:203] 
	W1119 02:01:10.853919 1473591 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:01:10Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:01:10Z" level=error msg="open /run/runc: no such file or directory"
	
	W1119 02:01:10.853949 1473591 out.go:285] * 
	* 
	W1119 02:01:10.863946 1473591 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 02:01:10.866883 1473591 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-arm64 -p addons-238225 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (19.41s)
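
The addon-disable failures in this group (LocalPath above, NvidiaDevicePlugin and Yakd below) share one root cause: before disabling an addon, minikube checks whether the cluster is paused (addons.go:607), lists the kube-system containers through crictl, and then runs `sudo runc list -f json`; that last command fails with "open /run/runc: no such file or directory", and the whole command exits with status 11 / MK_ADDON_DISABLE_PAUSED. A plausible reading is that the runc state directory was never created on this crio node (for example if the configured OCI runtime is crun), so there is nothing paused to report. The Go sketch below shows a more tolerant variant of the check under that assumption; the helper name listPausedContainers and the decision to treat a missing state directory as "nothing paused" are illustrative choices, not minikube's actual addons code.

package main

import (
	"encoding/json"
	"errors"
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// listPausedContainers asks runc for its container list and returns the IDs
// that report status "paused". If runc's state directory is missing (the
// "open /run/runc: no such file or directory" case seen in the logs above),
// it assumes nothing is paused instead of failing the whole command. This is
// an illustrative sketch, not minikube's implementation.
func listPausedContainers() ([]string, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) && strings.Contains(string(exitErr.Stderr), "no such file or directory") {
			return nil, nil // no runc state dir on this node; treat as no paused containers
		}
		return nil, fmt.Errorf("runc list: %w", err)
	}
	// runc prints a JSON array of container state objects.
	var states []struct {
		ID     string `json:"id"`
		Status string `json:"status"`
	}
	if err := json.Unmarshal(out, &states); err != nil {
		return nil, fmt.Errorf("decode runc list output: %w", err)
	}
	var paused []string
	for _, s := range states {
		if s.Status == "paused" {
			paused = append(paused, s.ID)
		}
	}
	return paused, nil
}

func main() {
	paused, err := listPausedContainers()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("paused containers: %v\n", paused)
}
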

TestAddons/parallel/NvidiaDevicePlugin (6.3s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-fb27k" [403cf382-cc51-4315-b678-7c3168a8179a] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004014607s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-238225 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-238225 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (292.870061ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1119 02:00:51.254891 1472497 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:00:51.256634 1472497 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:00:51.256655 1472497 out.go:374] Setting ErrFile to fd 2...
	I1119 02:00:51.256663 1472497 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:00:51.256935 1472497 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-1463525/.minikube/bin
	I1119 02:00:51.257236 1472497 mustload.go:66] Loading cluster: addons-238225
	I1119 02:00:51.257638 1472497 config.go:182] Loaded profile config "addons-238225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:00:51.257656 1472497 addons.go:607] checking whether the cluster is paused
	I1119 02:00:51.257765 1472497 config.go:182] Loaded profile config "addons-238225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:00:51.257782 1472497 host.go:66] Checking if "addons-238225" exists ...
	I1119 02:00:51.258237 1472497 cli_runner.go:164] Run: docker container inspect addons-238225 --format={{.State.Status}}
	I1119 02:00:51.278602 1472497 ssh_runner.go:195] Run: systemctl --version
	I1119 02:00:51.278659 1472497 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-238225
	I1119 02:00:51.299124 1472497 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34614 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/addons-238225/id_rsa Username:docker}
	I1119 02:00:51.404079 1472497 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 02:00:51.404184 1472497 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 02:00:51.435207 1472497 cri.go:89] found id: "9c232d33326a74acf603b22763aa1dc42d70a479e12432e50bfa9c9405eed2d8"
	I1119 02:00:51.435230 1472497 cri.go:89] found id: "772cfe62f02aaecb841e889b3cbb65d2a9b5651157073f7db00ccf3ddff4c0f1"
	I1119 02:00:51.435235 1472497 cri.go:89] found id: "21ceae69f9b817368777579d582cb590af22569048c0dbfdcc0ff812b0e66e82"
	I1119 02:00:51.435240 1472497 cri.go:89] found id: "d64b782c68c2447187c1e3efd65e0913b2455f09ccc969d04feb74abe38e660a"
	I1119 02:00:51.435243 1472497 cri.go:89] found id: "5e8f0f7f444317dfe5eacc4508981c90287704743391b71bc5ccb185d00f1f05"
	I1119 02:00:51.435247 1472497 cri.go:89] found id: "b38eaf566b86ba36188f3bd9d9c4bf78d2c17cca364dfee5652ee99d4b60a7b9"
	I1119 02:00:51.435251 1472497 cri.go:89] found id: "913d1dc20a3a2502a8c5187d02817ea7496846fae75cdd364154dcf3ba504b95"
	I1119 02:00:51.435254 1472497 cri.go:89] found id: "d4baa1f0a47d31c47f92d4737ca4bf1a74bf81781024ad8fb0bc1aab729ee9e4"
	I1119 02:00:51.435257 1472497 cri.go:89] found id: "91ecb63aa939ed937635e6c758cf1f28306bf72385e48c4d5d6e5eac9fe999f5"
	I1119 02:00:51.435269 1472497 cri.go:89] found id: "c53519ba9e004b3ff7be4f7f3cef7fab949fdcf796eaede9f39a73fd6b199e6e"
	I1119 02:00:51.435273 1472497 cri.go:89] found id: "dcca1b842fe4422e2747d4422c0f9f7b575eecab7d393d4f0995a33df7c79162"
	I1119 02:00:51.435277 1472497 cri.go:89] found id: "e4c59f62ececb825cf3a40b0802bc8b6ecb4d59770f79a84a7403b9319302101"
	I1119 02:00:51.435280 1472497 cri.go:89] found id: "be9d5b6bedfbc91bb699344892f0474b20a841254ce6fd3144408edd11bc007d"
	I1119 02:00:51.435284 1472497 cri.go:89] found id: "d79a6a486de50d2c0685228164143b81b6a22900f48f3a05491c47877066261b"
	I1119 02:00:51.435288 1472497 cri.go:89] found id: "99e3704db1eb401031a862edd15e56b4aec5c806bb339f38d76ba88c7e8fa047"
	I1119 02:00:51.435296 1472497 cri.go:89] found id: "b94070f6dc6d4ea17b3a67020e38e4caa93a1b8b83d5bb691770abfbccddba96"
	I1119 02:00:51.435306 1472497 cri.go:89] found id: "d0f307f4b6c3423d1af0ad1f8066d8df474dcdfb5ec77842739411e57b5bbc77"
	I1119 02:00:51.435312 1472497 cri.go:89] found id: "0c54dc25c8ad51cf6765dc7bc85a062001f2e7ac00a156aaa64443d92f972181"
	I1119 02:00:51.435315 1472497 cri.go:89] found id: "a841f7bd1c9314f581270e99b5249d563aa54a685fc9377709257d65d7241884"
	I1119 02:00:51.435318 1472497 cri.go:89] found id: "76ee598a60e1ecbb0846681cb536270450910784fdbfeec1b724bbc506bc7fc1"
	I1119 02:00:51.435322 1472497 cri.go:89] found id: "a757a1a6114f803952eab86dab9d7a3706e530f2d53eccfb6a046fcfea9ad3b4"
	I1119 02:00:51.435326 1472497 cri.go:89] found id: "7a77a55a81c017bef912f34dd320fb488cb213cabc9bee0e9a3126964c29252b"
	I1119 02:00:51.435328 1472497 cri.go:89] found id: "85abfad90a4c2830eebe69eeb776b9e0f018907069c8517cc51c16103c6b98c1"
	I1119 02:00:51.435332 1472497 cri.go:89] found id: ""
	I1119 02:00:51.435379 1472497 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 02:00:51.451191 1472497 out.go:203] 
	W1119 02:00:51.454303 1472497 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:00:51Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:00:51Z" level=error msg="open /run/runc: no such file or directory"
	
	W1119 02:00:51.454328 1472497 out.go:285] * 
	* 
	W1119 02:00:51.463870 1472497 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 02:00:51.466969 1472497 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-arm64 -p addons-238225 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (6.30s)

TestAddons/parallel/Yakd (6.36s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-97cnn" [5e75a731-6e7c-49bc-bca6-e4e6afa1ec11] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003224514s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-238225 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-238225 addons disable yakd --alsologtostderr -v=1: exit status 11 (360.5995ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1119 02:00:44.863897 1472395 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:00:44.865278 1472395 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:00:44.865292 1472395 out.go:374] Setting ErrFile to fd 2...
	I1119 02:00:44.865299 1472395 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:00:44.865585 1472395 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-1463525/.minikube/bin
	I1119 02:00:44.865888 1472395 mustload.go:66] Loading cluster: addons-238225
	I1119 02:00:44.866252 1472395 config.go:182] Loaded profile config "addons-238225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:00:44.866270 1472395 addons.go:607] checking whether the cluster is paused
	I1119 02:00:44.866382 1472395 config.go:182] Loaded profile config "addons-238225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:00:44.866398 1472395 host.go:66] Checking if "addons-238225" exists ...
	I1119 02:00:44.866842 1472395 cli_runner.go:164] Run: docker container inspect addons-238225 --format={{.State.Status}}
	I1119 02:00:44.889358 1472395 ssh_runner.go:195] Run: systemctl --version
	I1119 02:00:44.889437 1472395 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-238225
	I1119 02:00:44.908013 1472395 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34614 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/addons-238225/id_rsa Username:docker}
	I1119 02:00:45.014543 1472395 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 02:00:45.014648 1472395 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 02:00:45.112346 1472395 cri.go:89] found id: "9c232d33326a74acf603b22763aa1dc42d70a479e12432e50bfa9c9405eed2d8"
	I1119 02:00:45.112370 1472395 cri.go:89] found id: "772cfe62f02aaecb841e889b3cbb65d2a9b5651157073f7db00ccf3ddff4c0f1"
	I1119 02:00:45.112376 1472395 cri.go:89] found id: "21ceae69f9b817368777579d582cb590af22569048c0dbfdcc0ff812b0e66e82"
	I1119 02:00:45.112380 1472395 cri.go:89] found id: "d64b782c68c2447187c1e3efd65e0913b2455f09ccc969d04feb74abe38e660a"
	I1119 02:00:45.112383 1472395 cri.go:89] found id: "5e8f0f7f444317dfe5eacc4508981c90287704743391b71bc5ccb185d00f1f05"
	I1119 02:00:45.112388 1472395 cri.go:89] found id: "b38eaf566b86ba36188f3bd9d9c4bf78d2c17cca364dfee5652ee99d4b60a7b9"
	I1119 02:00:45.112391 1472395 cri.go:89] found id: "913d1dc20a3a2502a8c5187d02817ea7496846fae75cdd364154dcf3ba504b95"
	I1119 02:00:45.112395 1472395 cri.go:89] found id: "d4baa1f0a47d31c47f92d4737ca4bf1a74bf81781024ad8fb0bc1aab729ee9e4"
	I1119 02:00:45.112398 1472395 cri.go:89] found id: "91ecb63aa939ed937635e6c758cf1f28306bf72385e48c4d5d6e5eac9fe999f5"
	I1119 02:00:45.112412 1472395 cri.go:89] found id: "c53519ba9e004b3ff7be4f7f3cef7fab949fdcf796eaede9f39a73fd6b199e6e"
	I1119 02:00:45.112416 1472395 cri.go:89] found id: "dcca1b842fe4422e2747d4422c0f9f7b575eecab7d393d4f0995a33df7c79162"
	I1119 02:00:45.112419 1472395 cri.go:89] found id: "e4c59f62ececb825cf3a40b0802bc8b6ecb4d59770f79a84a7403b9319302101"
	I1119 02:00:45.112423 1472395 cri.go:89] found id: "be9d5b6bedfbc91bb699344892f0474b20a841254ce6fd3144408edd11bc007d"
	I1119 02:00:45.112426 1472395 cri.go:89] found id: "d79a6a486de50d2c0685228164143b81b6a22900f48f3a05491c47877066261b"
	I1119 02:00:45.112430 1472395 cri.go:89] found id: "99e3704db1eb401031a862edd15e56b4aec5c806bb339f38d76ba88c7e8fa047"
	I1119 02:00:45.112435 1472395 cri.go:89] found id: "b94070f6dc6d4ea17b3a67020e38e4caa93a1b8b83d5bb691770abfbccddba96"
	I1119 02:00:45.112440 1472395 cri.go:89] found id: "d0f307f4b6c3423d1af0ad1f8066d8df474dcdfb5ec77842739411e57b5bbc77"
	I1119 02:00:45.112444 1472395 cri.go:89] found id: "0c54dc25c8ad51cf6765dc7bc85a062001f2e7ac00a156aaa64443d92f972181"
	I1119 02:00:45.112448 1472395 cri.go:89] found id: "a841f7bd1c9314f581270e99b5249d563aa54a685fc9377709257d65d7241884"
	I1119 02:00:45.112452 1472395 cri.go:89] found id: "76ee598a60e1ecbb0846681cb536270450910784fdbfeec1b724bbc506bc7fc1"
	I1119 02:00:45.112458 1472395 cri.go:89] found id: "a757a1a6114f803952eab86dab9d7a3706e530f2d53eccfb6a046fcfea9ad3b4"
	I1119 02:00:45.112461 1472395 cri.go:89] found id: "7a77a55a81c017bef912f34dd320fb488cb213cabc9bee0e9a3126964c29252b"
	I1119 02:00:45.112464 1472395 cri.go:89] found id: "85abfad90a4c2830eebe69eeb776b9e0f018907069c8517cc51c16103c6b98c1"
	I1119 02:00:45.112468 1472395 cri.go:89] found id: ""
	I1119 02:00:45.112527 1472395 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 02:00:45.141141 1472395 out.go:203] 
	W1119 02:00:45.145344 1472395 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:00:45Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:00:45Z" level=error msg="open /run/runc: no such file or directory"
	
	W1119 02:00:45.145382 1472395 out.go:285] * 
	* 
	W1119 02:00:45.157147 1472395 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 02:00:45.163620 1472395 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-arm64 -p addons-238225 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (6.36s)

TestFunctional/parallel/ServiceCmdConnect (603.41s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-132054 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-132054 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-cxpns" [538e2547-deda-48d5-b04a-0d6c91671ce1] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-132054 -n functional-132054
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-11-19 02:17:53.100948621 +0000 UTC m=+1264.365541858
functional_test.go:1645: (dbg) Run:  kubectl --context functional-132054 describe po hello-node-connect-7d85dfc575-cxpns -n default
functional_test.go:1645: (dbg) kubectl --context functional-132054 describe po hello-node-connect-7d85dfc575-cxpns -n default:
Name:             hello-node-connect-7d85dfc575-cxpns
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-132054/192.168.49.2
Start Time:       Wed, 19 Nov 2025 02:07:52 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-txqlb (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-txqlb:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-cxpns to functional-132054
Normal   Pulling    7m6s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m6s (x5 over 10m)      kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m6s (x5 over 10m)      kubelet            Error: ErrImagePull
Normal   BackOff    4m59s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m59s (x21 over 9m59s)  kubelet            Error: ImagePullBackOff
functional_test.go:1645: (dbg) Run:  kubectl --context functional-132054 logs hello-node-connect-7d85dfc575-cxpns -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-132054 logs hello-node-connect-7d85dfc575-cxpns -n default: exit status 1 (94.807391ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-cxpns" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1645: kubectl --context functional-132054 logs hello-node-connect-7d85dfc575-cxpns -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-132054 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-cxpns
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-132054/192.168.49.2
Start Time:       Wed, 19 Nov 2025 02:07:52 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-txqlb (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-txqlb:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-cxpns to functional-132054
Normal   Pulling    7m6s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m6s (x5 over 10m)      kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m6s (x5 over 10m)      kubelet            Error: ErrImagePull
Normal   BackOff    4m59s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m59s (x21 over 9m59s)  kubelet            Error: ImagePullBackOff

functional_test.go:1618: (dbg) Run:  kubectl --context functional-132054 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-132054 logs -l app=hello-node-connect: exit status 1 (83.034011ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-cxpns" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1620: "kubectl --context functional-132054 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-132054 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.97.162.225
IPs:                      10.97.162.225
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31066/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
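
The ServiceCmdConnect failure is an image-pull problem rather than a networking one: the kubelet events above show "short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list", the container never starts, and the service is left with an empty Endpoints list. With short-name resolution enforced, an unqualified reference such as kicbase/echo-server has to be fully qualified (for example docker.io/kicbase/echo-server) or matched by an alias in the node's registries configuration; an "ambiguous list" typically means more than one unqualified-search registry is configured. The helper below is an illustrative sketch of qualifying a reference before handing it to kubectl; it is not part of functional_test.go, and the docker.io default is an assumption about where the image lives.

package main

import (
	"fmt"
	"strings"
)

// qualifyImage prefixes an unqualified image reference with docker.io/ so a
// runtime that enforces short-name resolution does not reject it as
// ambiguous. Illustrative helper, not part of the minikube test suite.
func qualifyImage(ref string) string {
	parts := strings.SplitN(ref, "/", 2)
	// A single-component reference is always short. A multi-component
	// reference is already qualified only if its first component looks like
	// a registry host: contains a dot or a colon, or is "localhost".
	if len(parts) == 2 {
		host := parts[0]
		if strings.ContainsAny(host, ".:") || host == "localhost" {
			return ref
		}
	}
	return "docker.io/" + ref
}

func main() {
	fmt.Println(qualifyImage("kicbase/echo-server"))       // docker.io/kicbase/echo-server
	fmt.Println(qualifyImage("registry.k8s.io/pause:3.1")) // already qualified, unchanged
}
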
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-132054
helpers_test.go:243: (dbg) docker inspect functional-132054:

-- stdout --
	[
	    {
	        "Id": "1a956e729a40b2039d8b0e26a6f5d7bb3f1bcfac3f83604cfbc310444fa83b42",
	        "Created": "2025-11-19T02:05:05.266670327Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1481121,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T02:05:05.308822033Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:6dfeb5329cc83d126555f88f43b09eef7a09c7f546c9166b94d33747df91b6df",
	        "ResolvConfPath": "/var/lib/docker/containers/1a956e729a40b2039d8b0e26a6f5d7bb3f1bcfac3f83604cfbc310444fa83b42/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1a956e729a40b2039d8b0e26a6f5d7bb3f1bcfac3f83604cfbc310444fa83b42/hostname",
	        "HostsPath": "/var/lib/docker/containers/1a956e729a40b2039d8b0e26a6f5d7bb3f1bcfac3f83604cfbc310444fa83b42/hosts",
	        "LogPath": "/var/lib/docker/containers/1a956e729a40b2039d8b0e26a6f5d7bb3f1bcfac3f83604cfbc310444fa83b42/1a956e729a40b2039d8b0e26a6f5d7bb3f1bcfac3f83604cfbc310444fa83b42-json.log",
	        "Name": "/functional-132054",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-132054:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-132054",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1a956e729a40b2039d8b0e26a6f5d7bb3f1bcfac3f83604cfbc310444fa83b42",
	                "LowerDir": "/var/lib/docker/overlay2/543683cc08125aad766c5ea30404ab0702bd6a70800eacc9d57c49c445fc9634-init/diff:/var/lib/docker/overlay2/c48d08e2bd245db4e1c5c6447aff9f72126e9377265a1f1172daf5070a059e2a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/543683cc08125aad766c5ea30404ab0702bd6a70800eacc9d57c49c445fc9634/merged",
	                "UpperDir": "/var/lib/docker/overlay2/543683cc08125aad766c5ea30404ab0702bd6a70800eacc9d57c49c445fc9634/diff",
	                "WorkDir": "/var/lib/docker/overlay2/543683cc08125aad766c5ea30404ab0702bd6a70800eacc9d57c49c445fc9634/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-132054",
	                "Source": "/var/lib/docker/volumes/functional-132054/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-132054",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-132054",
	                "name.minikube.sigs.k8s.io": "functional-132054",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0bc9a91e752278953c87f697ad55fbe04d3eaf2001a0808eeffe8f4bda804e83",
	            "SandboxKey": "/var/run/docker/netns/0bc9a91e7522",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34624"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34625"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34628"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34626"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34627"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-132054": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "da:9a:2e:c7:35:29",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "01475c25c2848eb77fa79adf600858cae23a6557cb5a7f7d0d094307c546c039",
	                    "EndpointID": "60738afc1341fbd695997dcbdcb94b0d2c244080aec9c4d0ec553c0c299aef6c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-132054",
	                        "1a956e729a40"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
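
For reference, the SSH host port that minikube's cli_runner resolves elsewhere in these logs with the template {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}} comes straight out of the NetworkSettings.Ports map in the inspect output above (HostPort 34624 for 22/tcp). The short Go sketch below decodes the same structure explicitly; the trimmed sample JSON is copied from the log and the program is illustrative, not minikube code.

package main

import (
	"encoding/json"
	"fmt"
)

// A trimmed-down copy of the NetworkSettings.Ports structure from the docker
// inspect output above, used to show what the inspect template is indexing.
const portsJSON = `{
  "22/tcp":   [{"HostIp": "127.0.0.1", "HostPort": "34624"}],
  "8441/tcp": [{"HostIp": "127.0.0.1", "HostPort": "34627"}]
}`

type binding struct {
	HostIp   string `json:"HostIp"`
	HostPort string `json:"HostPort"`
}

func main() {
	ports := map[string][]binding{}
	if err := json.Unmarshal([]byte(portsJSON), &ports); err != nil {
		panic(err)
	}
	// Equivalent to {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}.
	fmt.Println("ssh host port:", ports["22/tcp"][0].HostPort)
}
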
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-132054 -n functional-132054
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-132054 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-132054 logs -n 25: (1.454260832s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                            ARGS                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-132054 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                    │ functional-132054 │ jenkins │ v1.37.0 │ 19 Nov 25 02:06 UTC │ 19 Nov 25 02:06 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                           │ minikube          │ jenkins │ v1.37.0 │ 19 Nov 25 02:06 UTC │ 19 Nov 25 02:06 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 19 Nov 25 02:06 UTC │ 19 Nov 25 02:06 UTC │
	│ kubectl │ functional-132054 kubectl -- --context functional-132054 get pods                                                          │ functional-132054 │ jenkins │ v1.37.0 │ 19 Nov 25 02:06 UTC │ 19 Nov 25 02:06 UTC │
	│ start   │ -p functional-132054 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                   │ functional-132054 │ jenkins │ v1.37.0 │ 19 Nov 25 02:06 UTC │ 19 Nov 25 02:07 UTC │
	│ service │ invalid-svc -p functional-132054                                                                                           │ functional-132054 │ jenkins │ v1.37.0 │ 19 Nov 25 02:07 UTC │                     │
	│ cp      │ functional-132054 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                         │ functional-132054 │ jenkins │ v1.37.0 │ 19 Nov 25 02:07 UTC │ 19 Nov 25 02:07 UTC │
	│ config  │ functional-132054 config unset cpus                                                                                        │ functional-132054 │ jenkins │ v1.37.0 │ 19 Nov 25 02:07 UTC │ 19 Nov 25 02:07 UTC │
	│ config  │ functional-132054 config get cpus                                                                                          │ functional-132054 │ jenkins │ v1.37.0 │ 19 Nov 25 02:07 UTC │                     │
	│ config  │ functional-132054 config set cpus 2                                                                                        │ functional-132054 │ jenkins │ v1.37.0 │ 19 Nov 25 02:07 UTC │ 19 Nov 25 02:07 UTC │
	│ config  │ functional-132054 config get cpus                                                                                          │ functional-132054 │ jenkins │ v1.37.0 │ 19 Nov 25 02:07 UTC │ 19 Nov 25 02:07 UTC │
	│ config  │ functional-132054 config unset cpus                                                                                        │ functional-132054 │ jenkins │ v1.37.0 │ 19 Nov 25 02:07 UTC │ 19 Nov 25 02:07 UTC │
	│ ssh     │ functional-132054 ssh -n functional-132054 sudo cat /home/docker/cp-test.txt                                               │ functional-132054 │ jenkins │ v1.37.0 │ 19 Nov 25 02:07 UTC │ 19 Nov 25 02:07 UTC │
	│ config  │ functional-132054 config get cpus                                                                                          │ functional-132054 │ jenkins │ v1.37.0 │ 19 Nov 25 02:07 UTC │                     │
	│ ssh     │ functional-132054 ssh echo hello                                                                                           │ functional-132054 │ jenkins │ v1.37.0 │ 19 Nov 25 02:07 UTC │ 19 Nov 25 02:07 UTC │
	│ cp      │ functional-132054 cp functional-132054:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2172885285/001/cp-test.txt │ functional-132054 │ jenkins │ v1.37.0 │ 19 Nov 25 02:07 UTC │ 19 Nov 25 02:07 UTC │
	│ ssh     │ functional-132054 ssh cat /etc/hostname                                                                                    │ functional-132054 │ jenkins │ v1.37.0 │ 19 Nov 25 02:07 UTC │ 19 Nov 25 02:07 UTC │
	│ ssh     │ functional-132054 ssh -n functional-132054 sudo cat /home/docker/cp-test.txt                                               │ functional-132054 │ jenkins │ v1.37.0 │ 19 Nov 25 02:07 UTC │ 19 Nov 25 02:07 UTC │
	│ tunnel  │ functional-132054 tunnel --alsologtostderr                                                                                 │ functional-132054 │ jenkins │ v1.37.0 │ 19 Nov 25 02:07 UTC │                     │
	│ tunnel  │ functional-132054 tunnel --alsologtostderr                                                                                 │ functional-132054 │ jenkins │ v1.37.0 │ 19 Nov 25 02:07 UTC │                     │
	│ cp      │ functional-132054 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                  │ functional-132054 │ jenkins │ v1.37.0 │ 19 Nov 25 02:07 UTC │ 19 Nov 25 02:07 UTC │
	│ ssh     │ functional-132054 ssh -n functional-132054 sudo cat /tmp/does/not/exist/cp-test.txt                                        │ functional-132054 │ jenkins │ v1.37.0 │ 19 Nov 25 02:07 UTC │ 19 Nov 25 02:07 UTC │
	│ tunnel  │ functional-132054 tunnel --alsologtostderr                                                                                 │ functional-132054 │ jenkins │ v1.37.0 │ 19 Nov 25 02:07 UTC │                     │
	│ addons  │ functional-132054 addons list                                                                                              │ functional-132054 │ jenkins │ v1.37.0 │ 19 Nov 25 02:07 UTC │ 19 Nov 25 02:07 UTC │
	│ addons  │ functional-132054 addons list -o json                                                                                      │ functional-132054 │ jenkins │ v1.37.0 │ 19 Nov 25 02:07 UTC │ 19 Nov 25 02:07 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 02:06:56
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 02:06:56.111544 1485321 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:06:56.111644 1485321 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:06:56.111648 1485321 out.go:374] Setting ErrFile to fd 2...
	I1119 02:06:56.111651 1485321 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:06:56.111905 1485321 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-1463525/.minikube/bin
	I1119 02:06:56.112243 1485321 out.go:368] Setting JSON to false
	I1119 02:06:56.113156 1485321 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":35343,"bootTime":1763482673,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1119 02:06:56.113208 1485321 start.go:143] virtualization:  
	I1119 02:06:56.116686 1485321 out.go:179] * [functional-132054] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1119 02:06:56.120522 1485321 out.go:179]   - MINIKUBE_LOCATION=21924
	I1119 02:06:56.120640 1485321 notify.go:221] Checking for updates...
	I1119 02:06:56.126044 1485321 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 02:06:56.128839 1485321 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21924-1463525/kubeconfig
	I1119 02:06:56.131688 1485321 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-1463525/.minikube
	I1119 02:06:56.134509 1485321 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1119 02:06:56.137358 1485321 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 02:06:56.140690 1485321 config.go:182] Loaded profile config "functional-132054": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:06:56.140786 1485321 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 02:06:56.163986 1485321 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1119 02:06:56.164097 1485321 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:06:56.227104 1485321 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-11-19 02:06:56.215522028 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 02:06:56.227200 1485321 docker.go:319] overlay module found
	I1119 02:06:56.230335 1485321 out.go:179] * Using the docker driver based on existing profile
	I1119 02:06:56.233343 1485321 start.go:309] selected driver: docker
	I1119 02:06:56.233356 1485321 start.go:930] validating driver "docker" against &{Name:functional-132054 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-132054 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false D
isableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:06:56.233443 1485321 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 02:06:56.233570 1485321 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:06:56.285776 1485321 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-11-19 02:06:56.276869971 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 02:06:56.286193 1485321 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 02:06:56.286217 1485321 cni.go:84] Creating CNI manager for ""
	I1119 02:06:56.286271 1485321 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 02:06:56.286329 1485321 start.go:353] cluster config:
	{Name:functional-132054 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-132054 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Di
sableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:06:56.289480 1485321 out.go:179] * Starting "functional-132054" primary control-plane node in "functional-132054" cluster
	I1119 02:06:56.292222 1485321 cache.go:134] Beginning downloading kic base image for docker with crio
	I1119 02:06:56.295147 1485321 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1119 02:06:56.298126 1485321 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 02:06:56.298149 1485321 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1119 02:06:56.298164 1485321 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1119 02:06:56.298172 1485321 cache.go:65] Caching tarball of preloaded images
	I1119 02:06:56.298251 1485321 preload.go:238] Found /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1119 02:06:56.298260 1485321 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 02:06:56.298395 1485321 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/functional-132054/config.json ...
	I1119 02:06:56.317093 1485321 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1119 02:06:56.317104 1485321 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1119 02:06:56.317123 1485321 cache.go:243] Successfully downloaded all kic artifacts
	I1119 02:06:56.317146 1485321 start.go:360] acquireMachinesLock for functional-132054: {Name:mk6f8f53d059809e8ce7ba76b25ad3b6fe7ea1e4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 02:06:56.317213 1485321 start.go:364] duration metric: took 48.105µs to acquireMachinesLock for "functional-132054"
	I1119 02:06:56.317232 1485321 start.go:96] Skipping create...Using existing machine configuration
	I1119 02:06:56.317236 1485321 fix.go:54] fixHost starting: 
	I1119 02:06:56.317493 1485321 cli_runner.go:164] Run: docker container inspect functional-132054 --format={{.State.Status}}
	I1119 02:06:56.333425 1485321 fix.go:112] recreateIfNeeded on functional-132054: state=Running err=<nil>
	W1119 02:06:56.333446 1485321 fix.go:138] unexpected machine state, will restart: <nil>
	I1119 02:06:56.336820 1485321 out.go:252] * Updating the running docker "functional-132054" container ...
	I1119 02:06:56.336840 1485321 machine.go:94] provisionDockerMachine start ...
	I1119 02:06:56.336931 1485321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-132054
	I1119 02:06:56.354020 1485321 main.go:143] libmachine: Using SSH client type: native
	I1119 02:06:56.354324 1485321 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34624 <nil> <nil>}
	I1119 02:06:56.354331 1485321 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 02:06:56.493100 1485321 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-132054
	
	I1119 02:06:56.493113 1485321 ubuntu.go:182] provisioning hostname "functional-132054"
	I1119 02:06:56.493183 1485321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-132054
	I1119 02:06:56.512540 1485321 main.go:143] libmachine: Using SSH client type: native
	I1119 02:06:56.512826 1485321 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34624 <nil> <nil>}
	I1119 02:06:56.512835 1485321 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-132054 && echo "functional-132054" | sudo tee /etc/hostname
	I1119 02:06:56.666880 1485321 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-132054
	
	I1119 02:06:56.666953 1485321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-132054
	I1119 02:06:56.685310 1485321 main.go:143] libmachine: Using SSH client type: native
	I1119 02:06:56.685747 1485321 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34624 <nil> <nil>}
	I1119 02:06:56.685766 1485321 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-132054' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-132054/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-132054' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 02:06:56.829984 1485321 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 02:06:56.829997 1485321 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21924-1463525/.minikube CaCertPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21924-1463525/.minikube}
	I1119 02:06:56.830022 1485321 ubuntu.go:190] setting up certificates
	I1119 02:06:56.830031 1485321 provision.go:84] configureAuth start
	I1119 02:06:56.830092 1485321 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-132054
	I1119 02:06:56.847669 1485321 provision.go:143] copyHostCerts
	I1119 02:06:56.847732 1485321 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.pem, removing ...
	I1119 02:06:56.847748 1485321 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.pem
	I1119 02:06:56.847827 1485321 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.pem (1078 bytes)
	I1119 02:06:56.847930 1485321 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-1463525/.minikube/cert.pem, removing ...
	I1119 02:06:56.847934 1485321 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-1463525/.minikube/cert.pem
	I1119 02:06:56.847962 1485321 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21924-1463525/.minikube/cert.pem (1123 bytes)
	I1119 02:06:56.848024 1485321 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-1463525/.minikube/key.pem, removing ...
	I1119 02:06:56.848027 1485321 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-1463525/.minikube/key.pem
	I1119 02:06:56.848050 1485321 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21924-1463525/.minikube/key.pem (1675 bytes)
	I1119 02:06:56.848103 1485321 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca-key.pem org=jenkins.functional-132054 san=[127.0.0.1 192.168.49.2 functional-132054 localhost minikube]
	I1119 02:06:57.035197 1485321 provision.go:177] copyRemoteCerts
	I1119 02:06:57.035250 1485321 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 02:06:57.035295 1485321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-132054
	I1119 02:06:57.055459 1485321 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34624 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/functional-132054/id_rsa Username:docker}
	I1119 02:06:57.158121 1485321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1119 02:06:57.177335 1485321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1119 02:06:57.194657 1485321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1119 02:06:57.212871 1485321 provision.go:87] duration metric: took 382.815823ms to configureAuth
	I1119 02:06:57.212888 1485321 ubuntu.go:206] setting minikube options for container-runtime
	I1119 02:06:57.213102 1485321 config.go:182] Loaded profile config "functional-132054": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:06:57.213199 1485321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-132054
	I1119 02:06:57.231086 1485321 main.go:143] libmachine: Using SSH client type: native
	I1119 02:06:57.231425 1485321 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34624 <nil> <nil>}
	I1119 02:06:57.231438 1485321 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 02:07:02.633348 1485321 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 02:07:02.633360 1485321 machine.go:97] duration metric: took 6.296513595s to provisionDockerMachine
	I1119 02:07:02.633369 1485321 start.go:293] postStartSetup for "functional-132054" (driver="docker")
	I1119 02:07:02.633379 1485321 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 02:07:02.633437 1485321 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 02:07:02.633496 1485321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-132054
	I1119 02:07:02.650584 1485321 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34624 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/functional-132054/id_rsa Username:docker}
	I1119 02:07:02.753685 1485321 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 02:07:02.757030 1485321 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 02:07:02.757048 1485321 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 02:07:02.757057 1485321 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-1463525/.minikube/addons for local assets ...
	I1119 02:07:02.757113 1485321 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-1463525/.minikube/files for local assets ...
	I1119 02:07:02.757194 1485321 filesync.go:149] local asset: /home/jenkins/minikube-integration/21924-1463525/.minikube/files/etc/ssl/certs/14653772.pem -> 14653772.pem in /etc/ssl/certs
	I1119 02:07:02.757267 1485321 filesync.go:149] local asset: /home/jenkins/minikube-integration/21924-1463525/.minikube/files/etc/test/nested/copy/1465377/hosts -> hosts in /etc/test/nested/copy/1465377
	I1119 02:07:02.757313 1485321 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1465377
	I1119 02:07:02.764972 1485321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/files/etc/ssl/certs/14653772.pem --> /etc/ssl/certs/14653772.pem (1708 bytes)
	I1119 02:07:02.782151 1485321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/files/etc/test/nested/copy/1465377/hosts --> /etc/test/nested/copy/1465377/hosts (40 bytes)
	I1119 02:07:02.799893 1485321 start.go:296] duration metric: took 166.509391ms for postStartSetup
	I1119 02:07:02.799973 1485321 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 02:07:02.800018 1485321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-132054
	I1119 02:07:02.818628 1485321 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34624 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/functional-132054/id_rsa Username:docker}
	I1119 02:07:02.918734 1485321 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 02:07:02.923465 1485321 fix.go:56] duration metric: took 6.606221666s for fixHost
	I1119 02:07:02.923480 1485321 start.go:83] releasing machines lock for "functional-132054", held for 6.606259761s
	I1119 02:07:02.923558 1485321 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-132054
	I1119 02:07:02.943483 1485321 ssh_runner.go:195] Run: cat /version.json
	I1119 02:07:02.943538 1485321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-132054
	I1119 02:07:02.943805 1485321 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 02:07:02.943860 1485321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-132054
	I1119 02:07:02.968531 1485321 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34624 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/functional-132054/id_rsa Username:docker}
	I1119 02:07:02.994429 1485321 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34624 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/functional-132054/id_rsa Username:docker}
	I1119 02:07:03.085650 1485321 ssh_runner.go:195] Run: systemctl --version
	I1119 02:07:03.193868 1485321 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 02:07:03.230855 1485321 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 02:07:03.235352 1485321 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 02:07:03.235409 1485321 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 02:07:03.243005 1485321 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1119 02:07:03.243018 1485321 start.go:496] detecting cgroup driver to use...
	I1119 02:07:03.243048 1485321 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1119 02:07:03.243091 1485321 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 02:07:03.258030 1485321 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 02:07:03.270671 1485321 docker.go:218] disabling cri-docker service (if available) ...
	I1119 02:07:03.270724 1485321 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 02:07:03.285952 1485321 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 02:07:03.298761 1485321 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 02:07:03.428941 1485321 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 02:07:03.558272 1485321 docker.go:234] disabling docker service ...
	I1119 02:07:03.558328 1485321 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 02:07:03.573577 1485321 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 02:07:03.586599 1485321 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 02:07:03.720842 1485321 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 02:07:03.852234 1485321 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 02:07:03.865079 1485321 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 02:07:03.878667 1485321 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 02:07:03.878718 1485321 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:07:03.887237 1485321 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1119 02:07:03.887289 1485321 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:07:03.896411 1485321 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:07:03.904467 1485321 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:07:03.913204 1485321 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 02:07:03.921921 1485321 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:07:03.930763 1485321 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:07:03.938644 1485321 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:07:03.947145 1485321 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 02:07:03.954509 1485321 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 02:07:03.961559 1485321 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:07:04.093156 1485321 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1119 02:07:08.827398 1485321 ssh_runner.go:235] Completed: sudo systemctl restart crio: (4.734212071s)
	I1119 02:07:08.827416 1485321 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 02:07:08.827472 1485321 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 02:07:08.835859 1485321 start.go:564] Will wait 60s for crictl version
	I1119 02:07:08.835927 1485321 ssh_runner.go:195] Run: which crictl
	I1119 02:07:08.839424 1485321 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 02:07:08.867633 1485321 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1119 02:07:08.867708 1485321 ssh_runner.go:195] Run: crio --version
	I1119 02:07:08.895274 1485321 ssh_runner.go:195] Run: crio --version
	I1119 02:07:08.931420 1485321 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1119 02:07:08.934631 1485321 cli_runner.go:164] Run: docker network inspect functional-132054 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 02:07:08.952184 1485321 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1119 02:07:08.959566 1485321 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1119 02:07:08.962583 1485321 kubeadm.go:884] updating cluster {Name:functional-132054 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-132054 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 02:07:08.962718 1485321 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 02:07:08.962783 1485321 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 02:07:09.004524 1485321 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 02:07:09.004538 1485321 crio.go:433] Images already preloaded, skipping extraction
	I1119 02:07:09.004616 1485321 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 02:07:09.031506 1485321 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 02:07:09.031518 1485321 cache_images.go:86] Images are preloaded, skipping loading
	I1119 02:07:09.031523 1485321 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1119 02:07:09.031631 1485321 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-132054 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-132054 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 02:07:09.031712 1485321 ssh_runner.go:195] Run: crio config
	I1119 02:07:09.099797 1485321 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1119 02:07:09.099816 1485321 cni.go:84] Creating CNI manager for ""
	I1119 02:07:09.099829 1485321 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 02:07:09.099843 1485321 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 02:07:09.099869 1485321 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-132054 NodeName:functional-132054 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:ma
p[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 02:07:09.099985 1485321 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-132054"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1119 02:07:09.100056 1485321 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 02:07:09.107665 1485321 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 02:07:09.107750 1485321 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 02:07:09.115005 1485321 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1119 02:07:09.127434 1485321 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 02:07:09.139363 1485321 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2064 bytes)
	I1119 02:07:09.152242 1485321 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1119 02:07:09.155678 1485321 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:07:09.283897 1485321 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 02:07:09.297710 1485321 certs.go:69] Setting up /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/functional-132054 for IP: 192.168.49.2
	I1119 02:07:09.297721 1485321 certs.go:195] generating shared ca certs ...
	I1119 02:07:09.297735 1485321 certs.go:227] acquiring lock for ca certs: {Name:mk25124d30540ffea11da5f0aa38fb6f55186602 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:07:09.297869 1485321 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.key
	I1119 02:07:09.297903 1485321 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/proxy-client-ca.key
	I1119 02:07:09.297908 1485321 certs.go:257] generating profile certs ...
	I1119 02:07:09.297988 1485321 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/functional-132054/client.key
	I1119 02:07:09.298044 1485321 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/functional-132054/apiserver.key.22120605
	I1119 02:07:09.298083 1485321 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/functional-132054/proxy-client.key
	I1119 02:07:09.298186 1485321 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/1465377.pem (1338 bytes)
	W1119 02:07:09.298212 1485321 certs.go:480] ignoring /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/1465377_empty.pem, impossibly tiny 0 bytes
	I1119 02:07:09.298219 1485321 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca-key.pem (1679 bytes)
	I1119 02:07:09.298251 1485321 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem (1078 bytes)
	I1119 02:07:09.298270 1485321 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/cert.pem (1123 bytes)
	I1119 02:07:09.298290 1485321 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/key.pem (1675 bytes)
	I1119 02:07:09.298331 1485321 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/files/etc/ssl/certs/14653772.pem (1708 bytes)
	I1119 02:07:09.298919 1485321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 02:07:09.316620 1485321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1119 02:07:09.334626 1485321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 02:07:09.352214 1485321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1119 02:07:09.369051 1485321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/functional-132054/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1119 02:07:09.387797 1485321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/functional-132054/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1119 02:07:09.404752 1485321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/functional-132054/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 02:07:09.421621 1485321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/functional-132054/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1119 02:07:09.438437 1485321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 02:07:09.455499 1485321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/1465377.pem --> /usr/share/ca-certificates/1465377.pem (1338 bytes)
	I1119 02:07:09.471340 1485321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/files/etc/ssl/certs/14653772.pem --> /usr/share/ca-certificates/14653772.pem (1708 bytes)
	I1119 02:07:09.487519 1485321 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 02:07:09.499910 1485321 ssh_runner.go:195] Run: openssl version
	I1119 02:07:09.506338 1485321 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1465377.pem && ln -fs /usr/share/ca-certificates/1465377.pem /etc/ssl/certs/1465377.pem"
	I1119 02:07:09.514346 1485321 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1465377.pem
	I1119 02:07:09.517764 1485321 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 02:04 /usr/share/ca-certificates/1465377.pem
	I1119 02:07:09.517816 1485321 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1465377.pem
	I1119 02:07:09.558403 1485321 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1465377.pem /etc/ssl/certs/51391683.0"
	I1119 02:07:09.566030 1485321 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14653772.pem && ln -fs /usr/share/ca-certificates/14653772.pem /etc/ssl/certs/14653772.pem"
	I1119 02:07:09.573817 1485321 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14653772.pem
	I1119 02:07:09.577322 1485321 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 02:04 /usr/share/ca-certificates/14653772.pem
	I1119 02:07:09.577381 1485321 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14653772.pem
	I1119 02:07:09.618597 1485321 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14653772.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 02:07:09.626372 1485321 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 02:07:09.633979 1485321 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:07:09.637432 1485321 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 01:58 /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:07:09.637493 1485321 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:07:09.678201 1485321 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 02:07:09.685866 1485321 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 02:07:09.689285 1485321 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1119 02:07:09.729694 1485321 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1119 02:07:09.770690 1485321 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1119 02:07:09.811430 1485321 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1119 02:07:09.852159 1485321 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1119 02:07:09.893540 1485321 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1119 02:07:09.935551 1485321 kubeadm.go:401] StartCluster: {Name:functional-132054 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-132054 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:07:09.935630 1485321 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 02:07:09.935708 1485321 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 02:07:09.963568 1485321 cri.go:89] found id: "f94ac9106429b0c1a2d38089419b6f37093071e73fa7251e3a87330ad567afde"
	I1119 02:07:09.963580 1485321 cri.go:89] found id: "32faccedcb429773cd0e4ed54b9eeb1cb257a048513aeecf9d2f6ebf0e96a25d"
	I1119 02:07:09.963584 1485321 cri.go:89] found id: "056b59eaa977cafca35065788dacbbe964da008c23d67f22f31598db31c17a79"
	I1119 02:07:09.963587 1485321 cri.go:89] found id: "e2aaa967205a520722c6a7b7c81012e6d55ca4b453f2bcc1a41d61de5e677108"
	I1119 02:07:09.963589 1485321 cri.go:89] found id: "25ce384edb28a8c629c27bd18cbcb95d507b1ec7d7963186fbb1783589b9a70d"
	I1119 02:07:09.963592 1485321 cri.go:89] found id: "36b1780b6acbbfe4ba7c50fe3c2aa893f2bad79ab6b72cd97af1040ed42a3195"
	I1119 02:07:09.963595 1485321 cri.go:89] found id: "d1f689ae4813d3eb55cd1232eba7fd9e818f85c7790a1e7bee8632544e0dcbd8"
	I1119 02:07:09.963597 1485321 cri.go:89] found id: "a7c6997716a68f45858ff12c4cc0889ab8384c5761ca9d157ef17ec5f50a67bc"
	I1119 02:07:09.963599 1485321 cri.go:89] found id: ""
	I1119 02:07:09.963649 1485321 ssh_runner.go:195] Run: sudo runc list -f json
	W1119 02:07:09.974290 1485321 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:07:09Z" level=error msg="open /run/runc: no such file or directory"
	I1119 02:07:09.974363 1485321 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 02:07:09.981699 1485321 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1119 02:07:09.981708 1485321 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1119 02:07:09.981769 1485321 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1119 02:07:09.988644 1485321 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1119 02:07:09.989141 1485321 kubeconfig.go:125] found "functional-132054" server: "https://192.168.49.2:8441"
	I1119 02:07:09.990452 1485321 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1119 02:07:09.997863 1485321 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-11-19 02:05:12.209528289 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-11-19 02:07:09.145790922 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1119 02:07:09.997871 1485321 kubeadm.go:1161] stopping kube-system containers ...
	I1119 02:07:09.997881 1485321 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1119 02:07:09.997933 1485321 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 02:07:10.030402 1485321 cri.go:89] found id: "f94ac9106429b0c1a2d38089419b6f37093071e73fa7251e3a87330ad567afde"
	I1119 02:07:10.030414 1485321 cri.go:89] found id: "32faccedcb429773cd0e4ed54b9eeb1cb257a048513aeecf9d2f6ebf0e96a25d"
	I1119 02:07:10.030417 1485321 cri.go:89] found id: "056b59eaa977cafca35065788dacbbe964da008c23d67f22f31598db31c17a79"
	I1119 02:07:10.030419 1485321 cri.go:89] found id: "e2aaa967205a520722c6a7b7c81012e6d55ca4b453f2bcc1a41d61de5e677108"
	I1119 02:07:10.030422 1485321 cri.go:89] found id: "25ce384edb28a8c629c27bd18cbcb95d507b1ec7d7963186fbb1783589b9a70d"
	I1119 02:07:10.030424 1485321 cri.go:89] found id: "36b1780b6acbbfe4ba7c50fe3c2aa893f2bad79ab6b72cd97af1040ed42a3195"
	I1119 02:07:10.030427 1485321 cri.go:89] found id: "d1f689ae4813d3eb55cd1232eba7fd9e818f85c7790a1e7bee8632544e0dcbd8"
	I1119 02:07:10.030429 1485321 cri.go:89] found id: "a7c6997716a68f45858ff12c4cc0889ab8384c5761ca9d157ef17ec5f50a67bc"
	I1119 02:07:10.030431 1485321 cri.go:89] found id: ""
	I1119 02:07:10.030435 1485321 cri.go:252] Stopping containers: [f94ac9106429b0c1a2d38089419b6f37093071e73fa7251e3a87330ad567afde 32faccedcb429773cd0e4ed54b9eeb1cb257a048513aeecf9d2f6ebf0e96a25d 056b59eaa977cafca35065788dacbbe964da008c23d67f22f31598db31c17a79 e2aaa967205a520722c6a7b7c81012e6d55ca4b453f2bcc1a41d61de5e677108 25ce384edb28a8c629c27bd18cbcb95d507b1ec7d7963186fbb1783589b9a70d 36b1780b6acbbfe4ba7c50fe3c2aa893f2bad79ab6b72cd97af1040ed42a3195 d1f689ae4813d3eb55cd1232eba7fd9e818f85c7790a1e7bee8632544e0dcbd8 a7c6997716a68f45858ff12c4cc0889ab8384c5761ca9d157ef17ec5f50a67bc]
	I1119 02:07:10.030493 1485321 ssh_runner.go:195] Run: which crictl
	I1119 02:07:10.034473 1485321 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl stop --timeout=10 f94ac9106429b0c1a2d38089419b6f37093071e73fa7251e3a87330ad567afde 32faccedcb429773cd0e4ed54b9eeb1cb257a048513aeecf9d2f6ebf0e96a25d 056b59eaa977cafca35065788dacbbe964da008c23d67f22f31598db31c17a79 e2aaa967205a520722c6a7b7c81012e6d55ca4b453f2bcc1a41d61de5e677108 25ce384edb28a8c629c27bd18cbcb95d507b1ec7d7963186fbb1783589b9a70d 36b1780b6acbbfe4ba7c50fe3c2aa893f2bad79ab6b72cd97af1040ed42a3195 d1f689ae4813d3eb55cd1232eba7fd9e818f85c7790a1e7bee8632544e0dcbd8 a7c6997716a68f45858ff12c4cc0889ab8384c5761ca9d157ef17ec5f50a67bc
	I1119 02:07:10.098628 1485321 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1119 02:07:10.220622 1485321 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1119 02:07:10.228681 1485321 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Nov 19 02:05 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Nov 19 02:05 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1972 Nov 19 02:05 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Nov 19 02:05 /etc/kubernetes/scheduler.conf
	
	I1119 02:07:10.228747 1485321 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1119 02:07:10.236785 1485321 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1119 02:07:10.244405 1485321 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1119 02:07:10.244460 1485321 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1119 02:07:10.251841 1485321 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1119 02:07:10.259406 1485321 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1119 02:07:10.259458 1485321 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1119 02:07:10.266873 1485321 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1119 02:07:10.274390 1485321 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1119 02:07:10.274452 1485321 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1119 02:07:10.281322 1485321 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1119 02:07:10.288613 1485321 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1119 02:07:10.334653 1485321 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1119 02:07:14.171648 1485321 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (3.836965435s)
	I1119 02:07:14.171708 1485321 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1119 02:07:14.390180 1485321 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1119 02:07:14.443090 1485321 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1119 02:07:14.528173 1485321 api_server.go:52] waiting for apiserver process to appear ...
	I1119 02:07:14.528255 1485321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 02:07:15.028902 1485321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 02:07:15.528956 1485321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 02:07:15.547340 1485321 api_server.go:72] duration metric: took 1.019188543s to wait for apiserver process to appear ...
	I1119 02:07:15.547355 1485321 api_server.go:88] waiting for apiserver healthz status ...
	I1119 02:07:15.547372 1485321 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1119 02:07:18.657588 1485321 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1119 02:07:18.657607 1485321 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1119 02:07:18.657619 1485321 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1119 02:07:18.766248 1485321 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1119 02:07:18.766265 1485321 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1119 02:07:19.047624 1485321 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1119 02:07:19.066579 1485321 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1119 02:07:19.066598 1485321 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1119 02:07:19.547728 1485321 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1119 02:07:19.569821 1485321 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1119 02:07:19.569850 1485321 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1119 02:07:20.048318 1485321 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1119 02:07:20.073125 1485321 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1119 02:07:20.073144 1485321 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1119 02:07:20.547437 1485321 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1119 02:07:20.558511 1485321 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1119 02:07:20.578006 1485321 api_server.go:141] control plane version: v1.34.1
	I1119 02:07:20.578023 1485321 api_server.go:131] duration metric: took 5.030662784s to wait for apiserver health ...
	I1119 02:07:20.578030 1485321 cni.go:84] Creating CNI manager for ""
	I1119 02:07:20.578035 1485321 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 02:07:20.581867 1485321 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1119 02:07:20.585034 1485321 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1119 02:07:20.594393 1485321 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1119 02:07:20.594419 1485321 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1119 02:07:20.615137 1485321 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1119 02:07:21.136248 1485321 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 02:07:21.139460 1485321 system_pods.go:59] 8 kube-system pods found
	I1119 02:07:21.139484 1485321 system_pods.go:61] "coredns-66bc5c9577-7f8fz" [7e801282-bd19-4a85-a499-e7294ca135cd] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:07:21.139491 1485321 system_pods.go:61] "etcd-functional-132054" [5ec5b151-3f1e-451f-b89e-4fcdbf977706] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1119 02:07:21.139496 1485321 system_pods.go:61] "kindnet-zqqjn" [973c570e-e2b3-42a9-98ac-71fad498215e] Running
	I1119 02:07:21.139502 1485321 system_pods.go:61] "kube-apiserver-functional-132054" [37bc066d-289c-48fc-96f9-4687ecc08e21] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1119 02:07:21.139509 1485321 system_pods.go:61] "kube-controller-manager-functional-132054" [24b9600d-b838-4fa6-82c3-b460c9efe7dd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1119 02:07:21.139513 1485321 system_pods.go:61] "kube-proxy-s5v8d" [4edb09a6-0b6c-4e41-9974-37875ce1837c] Running
	I1119 02:07:21.139519 1485321 system_pods.go:61] "kube-scheduler-functional-132054" [a97ead8b-68f8-4739-94e6-a28dfe137070] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1119 02:07:21.139524 1485321 system_pods.go:61] "storage-provisioner" [ae0bd49e-56ba-4120-8e99-fc5f7304b945] Running
	I1119 02:07:21.139530 1485321 system_pods.go:74] duration metric: took 3.271215ms to wait for pod list to return data ...
	I1119 02:07:21.139536 1485321 node_conditions.go:102] verifying NodePressure condition ...
	I1119 02:07:21.142152 1485321 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1119 02:07:21.142170 1485321 node_conditions.go:123] node cpu capacity is 2
	I1119 02:07:21.142181 1485321 node_conditions.go:105] duration metric: took 2.640835ms to run NodePressure ...
	I1119 02:07:21.142238 1485321 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1119 02:07:21.393866 1485321 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1119 02:07:21.397671 1485321 kubeadm.go:744] kubelet initialised
	I1119 02:07:21.397682 1485321 kubeadm.go:745] duration metric: took 3.80372ms waiting for restarted kubelet to initialise ...
	I1119 02:07:21.397697 1485321 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1119 02:07:21.408769 1485321 ops.go:34] apiserver oom_adj: -16
	I1119 02:07:21.408780 1485321 kubeadm.go:602] duration metric: took 11.427066848s to restartPrimaryControlPlane
	I1119 02:07:21.408787 1485321 kubeadm.go:403] duration metric: took 11.473245446s to StartCluster
	I1119 02:07:21.408811 1485321 settings.go:142] acquiring lock: {Name:mk60840cd190596747bdc8f4ec6ab9c30048bf11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:07:21.408890 1485321 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21924-1463525/kubeconfig
	I1119 02:07:21.409613 1485321 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/kubeconfig: {Name:mk569dbb5fa76b01404eb61ebb04e9884665ea78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:07:21.409837 1485321 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 02:07:21.410080 1485321 config.go:182] Loaded profile config "functional-132054": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:07:21.410119 1485321 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 02:07:21.410188 1485321 addons.go:70] Setting storage-provisioner=true in profile "functional-132054"
	I1119 02:07:21.410201 1485321 addons.go:239] Setting addon storage-provisioner=true in "functional-132054"
	W1119 02:07:21.410206 1485321 addons.go:248] addon storage-provisioner should already be in state true
	I1119 02:07:21.410230 1485321 host.go:66] Checking if "functional-132054" exists ...
	I1119 02:07:21.410698 1485321 cli_runner.go:164] Run: docker container inspect functional-132054 --format={{.State.Status}}
	I1119 02:07:21.411150 1485321 addons.go:70] Setting default-storageclass=true in profile "functional-132054"
	I1119 02:07:21.411164 1485321 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-132054"
	I1119 02:07:21.411429 1485321 cli_runner.go:164] Run: docker container inspect functional-132054 --format={{.State.Status}}
	I1119 02:07:21.414057 1485321 out.go:179] * Verifying Kubernetes components...
	I1119 02:07:21.417028 1485321 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:07:21.443689 1485321 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 02:07:21.444413 1485321 addons.go:239] Setting addon default-storageclass=true in "functional-132054"
	W1119 02:07:21.444423 1485321 addons.go:248] addon default-storageclass should already be in state true
	I1119 02:07:21.444444 1485321 host.go:66] Checking if "functional-132054" exists ...
	I1119 02:07:21.444989 1485321 cli_runner.go:164] Run: docker container inspect functional-132054 --format={{.State.Status}}
	I1119 02:07:21.446577 1485321 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 02:07:21.446588 1485321 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 02:07:21.446654 1485321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-132054
	I1119 02:07:21.471211 1485321 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34624 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/functional-132054/id_rsa Username:docker}
	I1119 02:07:21.485778 1485321 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 02:07:21.485790 1485321 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 02:07:21.485865 1485321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-132054
	I1119 02:07:21.532219 1485321 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34624 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/functional-132054/id_rsa Username:docker}
	I1119 02:07:21.640383 1485321 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 02:07:21.657635 1485321 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 02:07:21.677005 1485321 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 02:07:22.558448 1485321 node_ready.go:35] waiting up to 6m0s for node "functional-132054" to be "Ready" ...
	I1119 02:07:22.561648 1485321 node_ready.go:49] node "functional-132054" is "Ready"
	I1119 02:07:22.561663 1485321 node_ready.go:38] duration metric: took 3.197946ms for node "functional-132054" to be "Ready" ...
	I1119 02:07:22.561673 1485321 api_server.go:52] waiting for apiserver process to appear ...
	I1119 02:07:22.561731 1485321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 02:07:22.569153 1485321 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1119 02:07:22.572020 1485321 addons.go:515] duration metric: took 1.16186906s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1119 02:07:22.574478 1485321 api_server.go:72] duration metric: took 1.164605203s to wait for apiserver process to appear ...
	I1119 02:07:22.574488 1485321 api_server.go:88] waiting for apiserver healthz status ...
	I1119 02:07:22.574505 1485321 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1119 02:07:22.583560 1485321 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1119 02:07:22.584523 1485321 api_server.go:141] control plane version: v1.34.1
	I1119 02:07:22.584534 1485321 api_server.go:131] duration metric: took 10.040666ms to wait for apiserver health ...
	I1119 02:07:22.584541 1485321 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 02:07:22.587867 1485321 system_pods.go:59] 8 kube-system pods found
	I1119 02:07:22.587883 1485321 system_pods.go:61] "coredns-66bc5c9577-7f8fz" [7e801282-bd19-4a85-a499-e7294ca135cd] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:07:22.587895 1485321 system_pods.go:61] "etcd-functional-132054" [5ec5b151-3f1e-451f-b89e-4fcdbf977706] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1119 02:07:22.587900 1485321 system_pods.go:61] "kindnet-zqqjn" [973c570e-e2b3-42a9-98ac-71fad498215e] Running
	I1119 02:07:22.587906 1485321 system_pods.go:61] "kube-apiserver-functional-132054" [37bc066d-289c-48fc-96f9-4687ecc08e21] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1119 02:07:22.587911 1485321 system_pods.go:61] "kube-controller-manager-functional-132054" [24b9600d-b838-4fa6-82c3-b460c9efe7dd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1119 02:07:22.587915 1485321 system_pods.go:61] "kube-proxy-s5v8d" [4edb09a6-0b6c-4e41-9974-37875ce1837c] Running
	I1119 02:07:22.587921 1485321 system_pods.go:61] "kube-scheduler-functional-132054" [a97ead8b-68f8-4739-94e6-a28dfe137070] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1119 02:07:22.587924 1485321 system_pods.go:61] "storage-provisioner" [ae0bd49e-56ba-4120-8e99-fc5f7304b945] Running
	I1119 02:07:22.587929 1485321 system_pods.go:74] duration metric: took 3.38427ms to wait for pod list to return data ...
	I1119 02:07:22.587934 1485321 default_sa.go:34] waiting for default service account to be created ...
	I1119 02:07:22.590394 1485321 default_sa.go:45] found service account: "default"
	I1119 02:07:22.590404 1485321 default_sa.go:55] duration metric: took 2.46654ms for default service account to be created ...
	I1119 02:07:22.590411 1485321 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 02:07:22.593227 1485321 system_pods.go:86] 8 kube-system pods found
	I1119 02:07:22.593245 1485321 system_pods.go:89] "coredns-66bc5c9577-7f8fz" [7e801282-bd19-4a85-a499-e7294ca135cd] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:07:22.593254 1485321 system_pods.go:89] "etcd-functional-132054" [5ec5b151-3f1e-451f-b89e-4fcdbf977706] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1119 02:07:22.593260 1485321 system_pods.go:89] "kindnet-zqqjn" [973c570e-e2b3-42a9-98ac-71fad498215e] Running
	I1119 02:07:22.593267 1485321 system_pods.go:89] "kube-apiserver-functional-132054" [37bc066d-289c-48fc-96f9-4687ecc08e21] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1119 02:07:22.593285 1485321 system_pods.go:89] "kube-controller-manager-functional-132054" [24b9600d-b838-4fa6-82c3-b460c9efe7dd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1119 02:07:22.593291 1485321 system_pods.go:89] "kube-proxy-s5v8d" [4edb09a6-0b6c-4e41-9974-37875ce1837c] Running
	I1119 02:07:22.593298 1485321 system_pods.go:89] "kube-scheduler-functional-132054" [a97ead8b-68f8-4739-94e6-a28dfe137070] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1119 02:07:22.593301 1485321 system_pods.go:89] "storage-provisioner" [ae0bd49e-56ba-4120-8e99-fc5f7304b945] Running
	I1119 02:07:22.593307 1485321 system_pods.go:126] duration metric: took 2.892003ms to wait for k8s-apps to be running ...
	I1119 02:07:22.593313 1485321 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 02:07:22.593369 1485321 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 02:07:22.606407 1485321 system_svc.go:56] duration metric: took 13.084681ms WaitForService to wait for kubelet
	I1119 02:07:22.606424 1485321 kubeadm.go:587] duration metric: took 1.196554952s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 02:07:22.606440 1485321 node_conditions.go:102] verifying NodePressure condition ...
	I1119 02:07:22.609278 1485321 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1119 02:07:22.609291 1485321 node_conditions.go:123] node cpu capacity is 2
	I1119 02:07:22.609301 1485321 node_conditions.go:105] duration metric: took 2.856935ms to run NodePressure ...
	I1119 02:07:22.609311 1485321 start.go:242] waiting for startup goroutines ...
	I1119 02:07:22.609318 1485321 start.go:247] waiting for cluster config update ...
	I1119 02:07:22.609328 1485321 start.go:256] writing updated cluster config ...
	I1119 02:07:22.609698 1485321 ssh_runner.go:195] Run: rm -f paused
	I1119 02:07:22.612887 1485321 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 02:07:22.616174 1485321 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-7f8fz" in "kube-system" namespace to be "Ready" or be gone ...
	W1119 02:07:24.621719 1485321 pod_ready.go:104] pod "coredns-66bc5c9577-7f8fz" is not "Ready", error: <nil>
	W1119 02:07:26.622103 1485321 pod_ready.go:104] pod "coredns-66bc5c9577-7f8fz" is not "Ready", error: <nil>
	I1119 02:07:28.621352 1485321 pod_ready.go:94] pod "coredns-66bc5c9577-7f8fz" is "Ready"
	I1119 02:07:28.621366 1485321 pod_ready.go:86] duration metric: took 6.005181449s for pod "coredns-66bc5c9577-7f8fz" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:07:28.624099 1485321 pod_ready.go:83] waiting for pod "etcd-functional-132054" in "kube-system" namespace to be "Ready" or be gone ...
	W1119 02:07:30.629034 1485321 pod_ready.go:104] pod "etcd-functional-132054" is not "Ready", error: <nil>
	I1119 02:07:32.129386 1485321 pod_ready.go:94] pod "etcd-functional-132054" is "Ready"
	I1119 02:07:32.129400 1485321 pod_ready.go:86] duration metric: took 3.505288279s for pod "etcd-functional-132054" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:07:32.131720 1485321 pod_ready.go:83] waiting for pod "kube-apiserver-functional-132054" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:07:32.136579 1485321 pod_ready.go:94] pod "kube-apiserver-functional-132054" is "Ready"
	I1119 02:07:32.136592 1485321 pod_ready.go:86] duration metric: took 4.859941ms for pod "kube-apiserver-functional-132054" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:07:32.138864 1485321 pod_ready.go:83] waiting for pod "kube-controller-manager-functional-132054" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:07:33.144281 1485321 pod_ready.go:94] pod "kube-controller-manager-functional-132054" is "Ready"
	I1119 02:07:33.144296 1485321 pod_ready.go:86] duration metric: took 1.00542068s for pod "kube-controller-manager-functional-132054" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:07:33.146763 1485321 pod_ready.go:83] waiting for pod "kube-proxy-s5v8d" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:07:33.151536 1485321 pod_ready.go:94] pod "kube-proxy-s5v8d" is "Ready"
	I1119 02:07:33.151550 1485321 pod_ready.go:86] duration metric: took 4.774577ms for pod "kube-proxy-s5v8d" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:07:33.327403 1485321 pod_ready.go:83] waiting for pod "kube-scheduler-functional-132054" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:07:33.727406 1485321 pod_ready.go:94] pod "kube-scheduler-functional-132054" is "Ready"
	I1119 02:07:33.727421 1485321 pod_ready.go:86] duration metric: took 400.004842ms for pod "kube-scheduler-functional-132054" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:07:33.727431 1485321 pod_ready.go:40] duration metric: took 11.114520322s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 02:07:33.789744 1485321 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1119 02:07:33.793053 1485321 out.go:179] * Done! kubectl is now configured to use "functional-132054" cluster and "default" namespace by default
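
The restart log above waits for the apiserver by polling https://192.168.49.2:8441/healthz, first getting 403 (anonymous user), then 500 while the rbac/bootstrap-roles and scheduling post-start hooks finish, and finally 200. A minimal Go sketch of that style of poll loop follows; the endpoint, the ~500ms cadence, and the insecure TLS client are assumptions for a throwaway local test cluster like this one, not minikube's own client code.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Endpoint taken from the log above; adjust for another cluster.
		const url = "https://192.168.49.2:8441/healthz"

		// The apiserver's serving cert is not trusted by the host here, so skip
		// verification (acceptable only for a disposable test cluster).
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}

		for {
			resp, err := client.Get(url)
			if err != nil {
				fmt.Println("healthz not reachable yet:", err)
			} else {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
				if resp.StatusCode == http.StatusOK {
					return // 200 "ok" means the apiserver reports healthy
				}
			}
			time.Sleep(500 * time.Millisecond) // roughly the retry cadence visible in the log
		}
	}

A real check would bound this loop with a deadline rather than looping forever, which is why the log records duration metrics alongside each wait.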
	
	
	==> CRI-O <==
	Nov 19 02:08:09 functional-132054 crio[3572]: time="2025-11-19T02:08:09.596228662Z" level=info msg="Got pod network &{Name:hello-node-75c85bcc94-5sszn Namespace:default ID:d0111b030b27cf9a3b418c77cb9a4822f87e86d1acc3441b314a48639313174f UID:69935075-1e20-4968-bfa5-a8d86b5091b3 NetNS:/var/run/netns/5378e671-52f0-40e4-8549-11fc4e05523b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000130e20}] Aliases:map[]}"
	Nov 19 02:08:09 functional-132054 crio[3572]: time="2025-11-19T02:08:09.59637374Z" level=info msg="Checking pod default_hello-node-75c85bcc94-5sszn for CNI network kindnet (type=ptp)"
	Nov 19 02:08:09 functional-132054 crio[3572]: time="2025-11-19T02:08:09.599771492Z" level=info msg="Ran pod sandbox d0111b030b27cf9a3b418c77cb9a4822f87e86d1acc3441b314a48639313174f with infra container: default/hello-node-75c85bcc94-5sszn/POD" id=d13dd6bd-b26a-4759-8d91-d0581b932f48 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 02:08:09 functional-132054 crio[3572]: time="2025-11-19T02:08:09.603429535Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=ffc3e24b-f3ce-4450-8308-00041c8dc9ea name=/runtime.v1.ImageService/PullImage
	Nov 19 02:08:14 functional-132054 crio[3572]: time="2025-11-19T02:08:14.599720935Z" level=info msg="Stopping pod sandbox: 79a377dc89d76f14667454bab15ca1fce156e10667135b62ceaa125ac390c7f4" id=ced558ef-108c-4b4b-8e33-e647bdfd0faa name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 19 02:08:14 functional-132054 crio[3572]: time="2025-11-19T02:08:14.599777425Z" level=info msg="Stopped pod sandbox (already stopped): 79a377dc89d76f14667454bab15ca1fce156e10667135b62ceaa125ac390c7f4" id=ced558ef-108c-4b4b-8e33-e647bdfd0faa name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 19 02:08:14 functional-132054 crio[3572]: time="2025-11-19T02:08:14.600169241Z" level=info msg="Removing pod sandbox: 79a377dc89d76f14667454bab15ca1fce156e10667135b62ceaa125ac390c7f4" id=62939829-1d7f-4700-946e-14932ff466a7 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 19 02:08:14 functional-132054 crio[3572]: time="2025-11-19T02:08:14.604013009Z" level=info msg="Removed pod sandbox: 79a377dc89d76f14667454bab15ca1fce156e10667135b62ceaa125ac390c7f4" id=62939829-1d7f-4700-946e-14932ff466a7 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 19 02:08:14 functional-132054 crio[3572]: time="2025-11-19T02:08:14.60447208Z" level=info msg="Stopping pod sandbox: fe50cc05a3ef35180de50cb2b468158421a5b765f4f33e8662af9a91cddc1194" id=fa613cd8-7a89-4a67-b368-da6dc79c1c47 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 19 02:08:14 functional-132054 crio[3572]: time="2025-11-19T02:08:14.604515738Z" level=info msg="Stopped pod sandbox (already stopped): fe50cc05a3ef35180de50cb2b468158421a5b765f4f33e8662af9a91cddc1194" id=fa613cd8-7a89-4a67-b368-da6dc79c1c47 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 19 02:08:14 functional-132054 crio[3572]: time="2025-11-19T02:08:14.604883702Z" level=info msg="Removing pod sandbox: fe50cc05a3ef35180de50cb2b468158421a5b765f4f33e8662af9a91cddc1194" id=21963208-25d1-4a68-b77a-95022dcf2e54 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 19 02:08:14 functional-132054 crio[3572]: time="2025-11-19T02:08:14.608758862Z" level=info msg="Removed pod sandbox: fe50cc05a3ef35180de50cb2b468158421a5b765f4f33e8662af9a91cddc1194" id=21963208-25d1-4a68-b77a-95022dcf2e54 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 19 02:08:14 functional-132054 crio[3572]: time="2025-11-19T02:08:14.609174775Z" level=info msg="Stopping pod sandbox: 76d662f6475b11f444e30344470b285d0d50fadfb046e3857b4b7584fb99e8bc" id=d5d1cf88-4225-4fec-9137-fd9b273def6d name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 19 02:08:14 functional-132054 crio[3572]: time="2025-11-19T02:08:14.609219721Z" level=info msg="Stopped pod sandbox (already stopped): 76d662f6475b11f444e30344470b285d0d50fadfb046e3857b4b7584fb99e8bc" id=d5d1cf88-4225-4fec-9137-fd9b273def6d name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 19 02:08:14 functional-132054 crio[3572]: time="2025-11-19T02:08:14.609668626Z" level=info msg="Removing pod sandbox: 76d662f6475b11f444e30344470b285d0d50fadfb046e3857b4b7584fb99e8bc" id=7d788c6c-d546-45a6-8028-1eb55ac0d3cb name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 19 02:08:14 functional-132054 crio[3572]: time="2025-11-19T02:08:14.613086192Z" level=info msg="Removed pod sandbox: 76d662f6475b11f444e30344470b285d0d50fadfb046e3857b4b7584fb99e8bc" id=7d788c6c-d546-45a6-8028-1eb55ac0d3cb name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 19 02:08:24 functional-132054 crio[3572]: time="2025-11-19T02:08:24.554970445Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=4c1bf999-7ccc-4bbe-a026-e4a0ad6615c4 name=/runtime.v1.ImageService/PullImage
	Nov 19 02:08:31 functional-132054 crio[3572]: time="2025-11-19T02:08:31.554286488Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=0cc9ef4c-6df2-4f3d-9a9f-9566b3a8f70b name=/runtime.v1.ImageService/PullImage
	Nov 19 02:08:51 functional-132054 crio[3572]: time="2025-11-19T02:08:51.55413298Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=38f320cc-611f-496f-8bc3-a76a35f31777 name=/runtime.v1.ImageService/PullImage
	Nov 19 02:09:17 functional-132054 crio[3572]: time="2025-11-19T02:09:17.554410331Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=dc887d1d-db7d-4c22-9f73-e1cc99f64a51 name=/runtime.v1.ImageService/PullImage
	Nov 19 02:09:32 functional-132054 crio[3572]: time="2025-11-19T02:09:32.554984148Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=1a34a972-0f6a-402c-a3c7-0bf1d9f192e9 name=/runtime.v1.ImageService/PullImage
	Nov 19 02:10:47 functional-132054 crio[3572]: time="2025-11-19T02:10:47.554041659Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=40af66fe-5b76-4530-bbd9-18f700af4eba name=/runtime.v1.ImageService/PullImage
	Nov 19 02:11:07 functional-132054 crio[3572]: time="2025-11-19T02:11:07.554645822Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=e6e959b7-e835-4807-93f9-91a1ceaf7273 name=/runtime.v1.ImageService/PullImage
	Nov 19 02:13:29 functional-132054 crio[3572]: time="2025-11-19T02:13:29.554322919Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=79a11bb0-50c3-464f-8d47-b11190ed1b30 name=/runtime.v1.ImageService/PullImage
	Nov 19 02:13:59 functional-132054 crio[3572]: time="2025-11-19T02:13:59.554103066Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=fe43c4df-52fd-4c23-b846-a0934d3799e4 name=/runtime.v1.ImageService/PullImage
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                             CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	5048956bcc172       docker.io/library/nginx@sha256:7de350c1fbb1f7b119a1d08f69fef5c92624cb01e03bc25c0ae11072b8969712   9 minutes ago       Running             myfrontend                0                   c54d5503c1feb       sp-pod                                      default
	7e6305ebea0e0       docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90   10 minutes ago      Running             nginx                     0                   72eb7a0d3c50b       nginx-svc                                   default
	2116eec18b433       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  10 minutes ago      Running             coredns                   2                   8eb1e95e18a9e       coredns-66bc5c9577-7f8fz                    kube-system
	caecc76e0aff7       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                  10 minutes ago      Running             kube-proxy                2                   16ddfd8420280       kube-proxy-s5v8d                            kube-system
	f8fe9a43537e5       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  10 minutes ago      Running             kindnet-cni               2                   210834cf1f6a0       kindnet-zqqjn                               kube-system
	4a6d8e297509f       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  10 minutes ago      Running             storage-provisioner       2                   19ca4d0a543f4       storage-provisioner                         kube-system
	6c7566acffdd9       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                  10 minutes ago      Running             kube-apiserver            0                   edca01ab4e452       kube-apiserver-functional-132054            kube-system
	1d82fe177e17e       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                  10 minutes ago      Running             kube-controller-manager   2                   94682bc1b15b4       kube-controller-manager-functional-132054   kube-system
	9fb3667e83e99       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                  10 minutes ago      Running             kube-scheduler            2                   19e5a1365e06d       kube-scheduler-functional-132054            kube-system
	7351d78fb8a06       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                  10 minutes ago      Running             etcd                      2                   2ba1f025c9414       etcd-functional-132054                      kube-system
	f94ac9106429b       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  11 minutes ago      Exited              storage-provisioner       1                   19ca4d0a543f4       storage-provisioner                         kube-system
	32faccedcb429       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  11 minutes ago      Exited              coredns                   1                   8eb1e95e18a9e       coredns-66bc5c9577-7f8fz                    kube-system
	056b59eaa977c       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                  11 minutes ago      Exited              kube-proxy                1                   16ddfd8420280       kube-proxy-s5v8d                            kube-system
	e2aaa967205a5       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  11 minutes ago      Exited              kindnet-cni               1                   210834cf1f6a0       kindnet-zqqjn                               kube-system
	25ce384edb28a       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                  11 minutes ago      Exited              kube-scheduler            1                   19e5a1365e06d       kube-scheduler-functional-132054            kube-system
	36b1780b6acbb       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                  11 minutes ago      Exited              kube-controller-manager   1                   94682bc1b15b4       kube-controller-manager-functional-132054   kube-system
	d1f689ae4813d       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                  11 minutes ago      Exited              etcd                      1                   2ba1f025c9414       etcd-functional-132054                      kube-system
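
The pod_ready.go waits in the log above (coredns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler) come down to watching each pod's Ready condition; the Exited/Running attempts in the table are the containers those pods cycled through during the restart. A small client-go sketch of the same readiness check, assuming a kubeconfig at the default ~/.kube/config path; this is illustrative only, not minikube's implementation.

	package main

	import (
		"context"
		"fmt"
		"os"
		"path/filepath"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's Ready condition is True.
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Assumption: ~/.kube/config points at the functional-132054 cluster.
		kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
		config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}

		pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, pod := range pods.Items {
			fmt.Printf("%-45s ready=%v\n", pod.Name, podReady(&pod))
		}
	}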
	
	
	==> coredns [2116eec18b4339f778d7712655db0e21aedb845c6dcf217e574fd0c0ddecd008] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:52871 - 18612 "HINFO IN 5761531693136320813.7165952777546370044. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012332918s
	
	
	==> coredns [32faccedcb429773cd0e4ed54b9eeb1cb257a048513aeecf9d2f6ebf0e96a25d] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60611 - 26140 "HINFO IN 7132379698994736007.753690953672722871. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.038068359s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-132054
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-132054
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ebd49efe8b7d44f89fc96e21beffcd64dfee8277
	                    minikube.k8s.io/name=functional-132054
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T02_05_31_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 02:05:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-132054
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 02:17:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 02:16:39 +0000   Wed, 19 Nov 2025 02:05:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 02:16:39 +0000   Wed, 19 Nov 2025 02:05:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 02:16:39 +0000   Wed, 19 Nov 2025 02:05:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 02:16:39 +0000   Wed, 19 Nov 2025 02:06:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-132054
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                81001a6e-3c58-46b9-8d4f-3e7f076c8cff
	  Boot ID:                    b92b1939-fcd0-45dc-ac89-2d161566a71c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-5sszn                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m45s
	  default                     hello-node-connect-7d85dfc575-cxpns          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m53s
	  kube-system                 coredns-66bc5c9577-7f8fz                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-functional-132054                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kindnet-zqqjn                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-functional-132054             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-132054    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-s5v8d                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-132054             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node functional-132054 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node functional-132054 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x8 over 12m)  kubelet          Node functional-132054 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node functional-132054 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node functional-132054 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     12m                kubelet          Node functional-132054 status is now: NodeHasSufficientPID
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           12m                node-controller  Node functional-132054 event: Registered Node functional-132054 in Controller
	  Normal   NodeReady                11m                kubelet          Node functional-132054 status is now: NodeReady
	  Normal   RegisteredNode           11m                node-controller  Node functional-132054 event: Registered Node functional-132054 in Controller
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-132054 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-132054 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-132054 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node functional-132054 event: Registered Node functional-132054 in Controller
	
	
	==> dmesg <==
	[Nov19 01:56] kauditd_printk_skb: 8 callbacks suppressed
	[Nov19 01:58] overlayfs: idmapped layers are currently not supported
	[Nov19 02:04] overlayfs: idmapped layers are currently not supported
	[Nov19 02:05] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [7351d78fb8a062bd0d8d611a58d7dba1cd68088423cc72ec5d52fa46ec510645] <==
	{"level":"warn","ts":"2025-11-19T02:07:16.971846Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:07:17.002544Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:07:17.017780Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:07:17.053725Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:07:17.083499Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:07:17.097963Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:07:17.127494Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:07:17.170610Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:07:17.204011Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:07:17.235708Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:07:17.279572Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:07:17.295862Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:07:17.324199Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:07:17.352027Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:07:17.413859Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:07:17.450273Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:07:17.475288Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:07:17.543561Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:07:17.570902Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:07:17.602350Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:07:17.644579Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:07:17.720934Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42190","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-19T02:17:16.160231Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1134}
	{"level":"info","ts":"2025-11-19T02:17:16.184107Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1134,"took":"23.499686ms","hash":2679795010,"current-db-size-bytes":3190784,"current-db-size":"3.2 MB","current-db-size-in-use-bytes":1404928,"current-db-size-in-use":"1.4 MB"}
	{"level":"info","ts":"2025-11-19T02:17:16.184158Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2679795010,"revision":1134,"compact-revision":-1}
	
	
	==> etcd [d1f689ae4813d3eb55cd1232eba7fd9e818f85c7790a1e7bee8632544e0dcbd8] <==
	{"level":"warn","ts":"2025-11-19T02:06:33.043955Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:06:33.102594Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:06:33.166374Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:06:33.172142Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38784","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:06:33.224017Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:06:33.238750Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:06:33.330353Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38822","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-19T02:06:57.395286Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-19T02:06:57.395327Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-132054","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-11-19T02:06:57.395405Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-19T02:06:57.541308Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-19T02:06:57.541382Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-19T02:06:57.541403Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-11-19T02:06:57.541470Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-11-19T02:06:57.541456Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-11-19T02:06:57.541800Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-19T02:06:57.541838Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-19T02:06:57.541846Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-19T02:06:57.541967Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-19T02:06:57.542029Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-19T02:06:57.542045Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-19T02:06:57.545728Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-11-19T02:06:57.545801Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-19T02:06:57.545828Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-11-19T02:06:57.545835Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-132054","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 02:17:54 up 10:00,  0 user,  load average: 0.04, 0.41, 0.73
	Linux functional-132054 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e2aaa967205a520722c6a7b7c81012e6d55ca4b453f2bcc1a41d61de5e677108] <==
	I1119 02:06:30.201791       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 02:06:30.202210       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1119 02:06:30.202403       1 main.go:148] setting mtu 1500 for CNI 
	I1119 02:06:30.202453       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 02:06:30.202491       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T02:06:30Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 02:06:30.410482       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 02:06:30.410549       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 02:06:30.410584       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 02:06:30.411525       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1119 02:06:34.813465       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 02:06:34.813591       1 metrics.go:72] Registering metrics
	I1119 02:06:34.813706       1 controller.go:711] "Syncing nftables rules"
	I1119 02:06:40.410638       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 02:06:40.410682       1 main.go:301] handling current node
	I1119 02:06:50.410691       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 02:06:50.410724       1 main.go:301] handling current node
	
	
	==> kindnet [f8fe9a43537e57e9ed15e9000c6f8be4b2a53cd173ea19d4338624f818fe65c1] <==
	I1119 02:15:50.229328       1 main.go:301] handling current node
	I1119 02:16:00.231134       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 02:16:00.231175       1 main.go:301] handling current node
	I1119 02:16:10.229684       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 02:16:10.229733       1 main.go:301] handling current node
	I1119 02:16:20.237959       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 02:16:20.238058       1 main.go:301] handling current node
	I1119 02:16:30.229728       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 02:16:30.229785       1 main.go:301] handling current node
	I1119 02:16:40.229736       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 02:16:40.229775       1 main.go:301] handling current node
	I1119 02:16:50.234368       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 02:16:50.234504       1 main.go:301] handling current node
	I1119 02:17:00.231414       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 02:17:00.231571       1 main.go:301] handling current node
	I1119 02:17:10.230109       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 02:17:10.230170       1 main.go:301] handling current node
	I1119 02:17:20.238040       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 02:17:20.238143       1 main.go:301] handling current node
	I1119 02:17:30.229416       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 02:17:30.229555       1 main.go:301] handling current node
	I1119 02:17:40.236622       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 02:17:40.236738       1 main.go:301] handling current node
	I1119 02:17:50.233852       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 02:17:50.233885       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6c7566acffdd956420822ed3fbf6adea4f98b8602ff8df8500b88ef60d512fdf] <==
	I1119 02:07:18.879142       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1119 02:07:18.879250       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1119 02:07:18.879316       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1119 02:07:18.883578       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1119 02:07:18.886318       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1119 02:07:18.886807       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1119 02:07:18.886857       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1119 02:07:18.927592       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 02:07:18.969577       1 cache.go:39] Caches are synced for autoregister controller
	I1119 02:07:19.489908       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 02:07:19.584268       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1119 02:07:21.128970       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1119 02:07:21.245814       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1119 02:07:21.319609       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 02:07:21.326331       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 02:07:22.163998       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1119 02:07:22.211944       1 controller.go:667] quota admission added evaluator for: endpoints
	I1119 02:07:22.272404       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1119 02:07:37.063815       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.110.74.192"}
	I1119 02:07:43.090474       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.106.54.194"}
	I1119 02:07:52.740401       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.97.162.225"}
	E1119 02:08:00.766337       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:54624: use of closed network connection
	E1119 02:08:09.141634       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:44992: use of closed network connection
	I1119 02:08:09.340381       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.108.223.169"}
	I1119 02:17:18.831201       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [1d82fe177e17e0a5e7a0d6540a23f5628b8b525006b0fc799bc7e31359001691] <==
	I1119 02:07:21.929279       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1119 02:07:21.934480       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1119 02:07:21.942803       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1119 02:07:21.946842       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1119 02:07:21.946889       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1119 02:07:21.949257       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1119 02:07:21.955683       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1119 02:07:21.955782       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1119 02:07:21.955877       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1119 02:07:21.955911       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1119 02:07:21.955943       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1119 02:07:21.955974       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1119 02:07:21.956002       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1119 02:07:21.957650       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1119 02:07:21.957703       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1119 02:07:21.957993       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1119 02:07:21.958584       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1119 02:07:21.969731       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 02:07:21.981610       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1119 02:07:21.992938       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1119 02:07:22.011337       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 02:07:22.017664       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 02:07:22.019052       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 02:07:22.019124       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1119 02:07:22.019162       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [36b1780b6acbbfe4ba7c50fe3c2aa893f2bad79ab6b72cd97af1040ed42a3195] <==
	I1119 02:06:38.298313       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1119 02:06:38.298358       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1119 02:06:38.298402       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1119 02:06:38.299853       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1119 02:06:38.302675       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 02:06:38.302737       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1119 02:06:38.302786       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1119 02:06:38.303415       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1119 02:06:38.305425       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 02:06:38.306747       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1119 02:06:38.309718       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1119 02:06:38.310960       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1119 02:06:38.320206       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1119 02:06:38.333544       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 02:06:38.342656       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1119 02:06:38.344804       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1119 02:06:38.345991       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1119 02:06:38.346048       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1119 02:06:38.346009       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1119 02:06:38.346020       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1119 02:06:38.346034       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1119 02:06:38.348311       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1119 02:06:38.350952       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1119 02:06:38.355244       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1119 02:06:38.355253       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	
	
	==> kube-proxy [056b59eaa977cafca35065788dacbbe964da008c23d67f22f31598db31c17a79] <==
	I1119 02:06:30.444563       1 server_linux.go:53] "Using iptables proxy"
	I1119 02:06:31.371192       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 02:06:34.793646       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 02:06:34.793683       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1119 02:06:34.793753       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 02:06:36.224548       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 02:06:36.224683       1 server_linux.go:132] "Using iptables Proxier"
	I1119 02:06:36.556048       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 02:06:36.561382       1 server.go:527] "Version info" version="v1.34.1"
	I1119 02:06:36.633586       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 02:06:36.816972       1 config.go:200] "Starting service config controller"
	I1119 02:06:36.817001       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 02:06:36.817029       1 config.go:106] "Starting endpoint slice config controller"
	I1119 02:06:36.817034       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 02:06:36.817045       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 02:06:36.817049       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 02:06:36.841383       1 config.go:309] "Starting node config controller"
	I1119 02:06:36.841479       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 02:06:36.841533       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 02:06:36.917307       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1119 02:06:36.917419       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1119 02:06:36.917441       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [caecc76e0aff78c435b66e5adf422bfcc04312fe0388fe0c32a98dcdcc719336] <==
	I1119 02:07:20.210614       1 server_linux.go:53] "Using iptables proxy"
	I1119 02:07:20.307085       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 02:07:20.407268       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 02:07:20.407306       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1119 02:07:20.407437       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 02:07:20.426381       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 02:07:20.426438       1 server_linux.go:132] "Using iptables Proxier"
	I1119 02:07:20.431314       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 02:07:20.431673       1 server.go:527] "Version info" version="v1.34.1"
	I1119 02:07:20.434559       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 02:07:20.437030       1 config.go:200] "Starting service config controller"
	I1119 02:07:20.437052       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 02:07:20.438140       1 config.go:106] "Starting endpoint slice config controller"
	I1119 02:07:20.438899       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 02:07:20.438249       1 config.go:309] "Starting node config controller"
	I1119 02:07:20.439010       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 02:07:20.439052       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 02:07:20.438672       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 02:07:20.439111       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 02:07:20.537309       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1119 02:07:20.539302       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1119 02:07:20.539342       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [25ce384edb28a8c629c27bd18cbcb95d507b1ec7d7963186fbb1783589b9a70d] <==
	I1119 02:06:35.576090       1 serving.go:386] Generated self-signed cert in-memory
	I1119 02:06:36.713004       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1119 02:06:36.713125       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 02:06:36.720076       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1119 02:06:36.720338       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1119 02:06:36.720394       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1119 02:06:36.720440       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1119 02:06:36.737210       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 02:06:36.737305       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 02:06:36.737354       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1119 02:06:36.746199       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1119 02:06:36.821613       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1119 02:06:36.837701       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 02:06:36.847182       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1119 02:06:57.398386       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1119 02:06:57.398429       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1119 02:06:57.398450       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1119 02:06:57.398478       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1119 02:06:57.398497       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 02:06:57.398515       1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
	I1119 02:06:57.398791       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1119 02:06:57.398820       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [9fb3667e83e996d5c072d9b09efda3119f93f1ec5a5433ca0337d024d24e2032] <==
	I1119 02:07:17.168140       1 serving.go:386] Generated self-signed cert in-memory
	I1119 02:07:20.131229       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1119 02:07:20.131341       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 02:07:20.136366       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1119 02:07:20.136452       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1119 02:07:20.136500       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1119 02:07:20.136541       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 02:07:20.136557       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 02:07:20.136425       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1119 02:07:20.136529       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1119 02:07:20.137345       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1119 02:07:20.237709       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1119 02:07:20.237868       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1119 02:07:20.237924       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 19 02:15:10 functional-132054 kubelet[3890]: E1119 02:15:10.554274    3890 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-cxpns" podUID="538e2547-deda-48d5-b04a-0d6c91671ce1"
	Nov 19 02:15:21 functional-132054 kubelet[3890]: E1119 02:15:21.553756    3890 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-5sszn" podUID="69935075-1e20-4968-bfa5-a8d86b5091b3"
	Nov 19 02:15:24 functional-132054 kubelet[3890]: E1119 02:15:24.554797    3890 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-cxpns" podUID="538e2547-deda-48d5-b04a-0d6c91671ce1"
	Nov 19 02:15:36 functional-132054 kubelet[3890]: E1119 02:15:36.553708    3890 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-5sszn" podUID="69935075-1e20-4968-bfa5-a8d86b5091b3"
	Nov 19 02:15:38 functional-132054 kubelet[3890]: E1119 02:15:38.553490    3890 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-cxpns" podUID="538e2547-deda-48d5-b04a-0d6c91671ce1"
	Nov 19 02:15:49 functional-132054 kubelet[3890]: E1119 02:15:49.554083    3890 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-5sszn" podUID="69935075-1e20-4968-bfa5-a8d86b5091b3"
	Nov 19 02:15:52 functional-132054 kubelet[3890]: E1119 02:15:52.554340    3890 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-cxpns" podUID="538e2547-deda-48d5-b04a-0d6c91671ce1"
	Nov 19 02:16:03 functional-132054 kubelet[3890]: E1119 02:16:03.554296    3890 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-5sszn" podUID="69935075-1e20-4968-bfa5-a8d86b5091b3"
	Nov 19 02:16:06 functional-132054 kubelet[3890]: E1119 02:16:06.554531    3890 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-cxpns" podUID="538e2547-deda-48d5-b04a-0d6c91671ce1"
	Nov 19 02:16:17 functional-132054 kubelet[3890]: E1119 02:16:17.554351    3890 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-cxpns" podUID="538e2547-deda-48d5-b04a-0d6c91671ce1"
	Nov 19 02:16:17 functional-132054 kubelet[3890]: E1119 02:16:17.554386    3890 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-5sszn" podUID="69935075-1e20-4968-bfa5-a8d86b5091b3"
	Nov 19 02:16:28 functional-132054 kubelet[3890]: E1119 02:16:28.554325    3890 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-5sszn" podUID="69935075-1e20-4968-bfa5-a8d86b5091b3"
	Nov 19 02:16:30 functional-132054 kubelet[3890]: E1119 02:16:30.553738    3890 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-cxpns" podUID="538e2547-deda-48d5-b04a-0d6c91671ce1"
	Nov 19 02:16:42 functional-132054 kubelet[3890]: E1119 02:16:42.554164    3890 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-5sszn" podUID="69935075-1e20-4968-bfa5-a8d86b5091b3"
	Nov 19 02:16:42 functional-132054 kubelet[3890]: E1119 02:16:42.555439    3890 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-cxpns" podUID="538e2547-deda-48d5-b04a-0d6c91671ce1"
	Nov 19 02:16:55 functional-132054 kubelet[3890]: E1119 02:16:55.553955    3890 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-5sszn" podUID="69935075-1e20-4968-bfa5-a8d86b5091b3"
	Nov 19 02:16:56 functional-132054 kubelet[3890]: E1119 02:16:56.555614    3890 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-cxpns" podUID="538e2547-deda-48d5-b04a-0d6c91671ce1"
	Nov 19 02:17:09 functional-132054 kubelet[3890]: E1119 02:17:09.553923    3890 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-5sszn" podUID="69935075-1e20-4968-bfa5-a8d86b5091b3"
	Nov 19 02:17:11 functional-132054 kubelet[3890]: E1119 02:17:11.553850    3890 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-cxpns" podUID="538e2547-deda-48d5-b04a-0d6c91671ce1"
	Nov 19 02:17:23 functional-132054 kubelet[3890]: E1119 02:17:23.553901    3890 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-5sszn" podUID="69935075-1e20-4968-bfa5-a8d86b5091b3"
	Nov 19 02:17:26 functional-132054 kubelet[3890]: E1119 02:17:26.554134    3890 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-cxpns" podUID="538e2547-deda-48d5-b04a-0d6c91671ce1"
	Nov 19 02:17:35 functional-132054 kubelet[3890]: E1119 02:17:35.553916    3890 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-5sszn" podUID="69935075-1e20-4968-bfa5-a8d86b5091b3"
	Nov 19 02:17:37 functional-132054 kubelet[3890]: E1119 02:17:37.554094    3890 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-cxpns" podUID="538e2547-deda-48d5-b04a-0d6c91671ce1"
	Nov 19 02:17:49 functional-132054 kubelet[3890]: E1119 02:17:49.553822    3890 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-cxpns" podUID="538e2547-deda-48d5-b04a-0d6c91671ce1"
	Nov 19 02:17:50 functional-132054 kubelet[3890]: E1119 02:17:50.553776    3890 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-5sszn" podUID="69935075-1e20-4968-bfa5-a8d86b5091b3"
	
	
	==> storage-provisioner [4a6d8e297509f0321e28df7ae3dc44323d3d296059b1da5df04fa56fb41c8497] <==
	W1119 02:17:30.341157       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:17:32.343782       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:17:32.348999       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:17:34.353679       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:17:34.360185       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:17:36.363324       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:17:36.367642       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:17:38.371384       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:17:38.375866       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:17:40.379456       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:17:40.383882       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:17:42.387263       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:17:42.394420       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:17:44.398095       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:17:44.402327       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:17:46.404993       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:17:46.409738       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:17:48.413184       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:17:48.420046       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:17:50.423390       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:17:50.427793       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:17:52.430754       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:17:52.438432       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:17:54.442503       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:17:54.451163       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [f94ac9106429b0c1a2d38089419b6f37093071e73fa7251e3a87330ad567afde] <==
	I1119 02:06:30.390163       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1119 02:06:34.959227       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1119 02:06:34.959269       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1119 02:06:35.095030       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:06:38.671601       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:06:42.932363       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:06:46.531044       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:06:49.584403       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:06:52.606865       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:06:52.613977       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 02:06:52.614128       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1119 02:06:52.614285       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-132054_28e5f174-6eab-481e-8dca-a652b24afe2f!
	I1119 02:06:52.614694       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a8f27f0d-8afb-4c66-b7b0-2102b4ddc2c4", APIVersion:"v1", ResourceVersion:"565", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-132054_28e5f174-6eab-481e-8dca-a652b24afe2f became leader
	W1119 02:06:52.630438       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:06:52.634494       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 02:06:52.714945       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-132054_28e5f174-6eab-481e-8dca-a652b24afe2f!
	W1119 02:06:54.638034       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:06:54.642372       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:06:56.646058       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 02:06:56.650284       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-132054 -n functional-132054
helpers_test.go:269: (dbg) Run:  kubectl --context functional-132054 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-node-75c85bcc94-5sszn hello-node-connect-7d85dfc575-cxpns
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-132054 describe pod hello-node-75c85bcc94-5sszn hello-node-connect-7d85dfc575-cxpns
helpers_test.go:290: (dbg) kubectl --context functional-132054 describe pod hello-node-75c85bcc94-5sszn hello-node-connect-7d85dfc575-cxpns:

                                                
                                                
-- stdout --
	Name:             hello-node-75c85bcc94-5sszn
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-132054/192.168.49.2
	Start Time:       Wed, 19 Nov 2025 02:08:09 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gl2d9 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-gl2d9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m46s                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-5sszn to functional-132054
	  Normal   Pulling    6m48s (x5 over 9m46s)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m48s (x5 over 9m46s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     6m48s (x5 over 9m46s)   kubelet            Error: ErrImagePull
	  Warning  Failed     4m33s (x20 over 9m46s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m22s (x21 over 9m46s)  kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-7d85dfc575-cxpns
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-132054/192.168.49.2
	Start Time:       Wed, 19 Nov 2025 02:07:52 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-txqlb (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-txqlb:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-cxpns to functional-132054
	  Normal   Pulling    7m8s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m8s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m8s (x5 over 10m)   kubelet            Error: ErrImagePull
	  Normal   BackOff    5m1s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     5m1s (x21 over 10m)  kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.41s)
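
Note: the pod events above point at CRI-O's short-name policy ("short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list") rather than a missing image. A minimal workaround sketch, assuming the image is published on docker.io and using a hypothetical deployment name, is to bypass short-name resolution with a fully qualified reference:

	# hypothetical reproduction with a fully qualified image name (docker.io/ prefix is an assumption)
	kubectl --context functional-132054 create deployment hello-node-fq --image=docker.io/kicbase/echo-server:latest
	kubectl --context functional-132054 rollout status deployment/hello-node-fq --timeout=120s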

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (600.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-132054 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-132054 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-5sszn" [69935075-1e20-4968-bfa5-a8d86b5091b3] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E1119 02:08:12.876647 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:10:29.010120 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:10:56.718877 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:15:29.009960 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-132054 -n functional-132054
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-11-19 02:18:09.801825547 +0000 UTC m=+1281.066418825
functional_test.go:1460: (dbg) Run:  kubectl --context functional-132054 describe po hello-node-75c85bcc94-5sszn -n default
functional_test.go:1460: (dbg) kubectl --context functional-132054 describe po hello-node-75c85bcc94-5sszn -n default:
Name:             hello-node-75c85bcc94-5sszn
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-132054/192.168.49.2
Start Time:       Wed, 19 Nov 2025 02:08:09 +0000
Labels:           app=hello-node
pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gl2d9 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-gl2d9:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-5sszn to functional-132054
Normal   Pulling    7m2s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m2s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m2s (x5 over 10m)    kubelet            Error: ErrImagePull
Warning  Failed     4m47s (x20 over 10m)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m36s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-132054 logs hello-node-75c85bcc94-5sszn -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-132054 logs hello-node-75c85bcc94-5sszn -n default: exit status 1 (134.806784ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-5sszn" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-132054 logs hello-node-75c85bcc94-5sszn -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.89s)
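
Note: same root cause as TestFunctional/parallel/ServiceCmdConnect above; the unqualified name "kicbase/echo-server" is rejected under enforcing short-name mode. A diagnostic sketch for confirming the node's policy (these are the usual containers-registries.conf(5) locations and may differ in this image):

	minikube -p functional-132054 ssh -- sudo cat /etc/containers/registries.conf
	minikube -p functional-132054 ssh -- sudo ls /etc/containers/registries.conf.d/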

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-132054 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-132054 service --namespace=default --https --url hello-node: exit status 115 (478.355963ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:30319
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-132054 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.48s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-132054 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-132054 service hello-node --url --format={{.IP}}: exit status 115 (582.454489ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-132054 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.58s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-132054 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-132054 service hello-node --url: exit status 115 (595.879561ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:30319
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-132054 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30319
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.60s)
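
Note: the HTTPS/Format/URL subtests fail downstream of the image-pull problem: the NodePort URL is printed, but minikube exits with SVC_UNREACHABLE because no pod backs the service. A quick check that separates "service missing" from "no ready endpoints" (diagnostic sketch, names taken from the output above):

	kubectl --context functional-132054 get svc hello-node -o wide
	kubectl --context functional-132054 get endpoints hello-node
	kubectl --context functional-132054 get pods -l app=hello-node -o wide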

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-132054 image load --daemon kicbase/echo-server:functional-132054 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-132054 image load --daemon kicbase/echo-server:functional-132054 --alsologtostderr: (1.5015072s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-132054 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-132054" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.78s)
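
Note: the load-to-daemon subtests fail on the follow-up "image ls" assertion. A minimal sketch for checking where the image actually ended up, comparing minikube's listing with CRI-O's store inside the node (crictl output format varies by version):

	out/minikube-linux-arm64 -p functional-132054 image ls
	minikube -p functional-132054 ssh -- sudo crictl images | grep echo-server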

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-132054 image load --daemon kicbase/echo-server:functional-132054 --alsologtostderr
2025/11/19 02:18:20 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-132054 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-132054" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.98s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-132054
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-132054 image load --daemon kicbase/echo-server:functional-132054 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-132054 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-132054" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.40s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-132054 image save kicbase/echo-server:functional-132054 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.41s)
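
Note: "image save" exited 0 but produced no tar, which is also why ImageLoadFromFile later fails with "no such file or directory". A hedged manual re-run (the /tmp path is an arbitrary example):

	out/minikube-linux-arm64 -p functional-132054 image save kicbase/echo-server:functional-132054 /tmp/echo-server-save.tar --alsologtostderr
	ls -l /tmp/echo-server-save.tar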

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-132054 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1119 02:18:23.797274 1493648 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:18:23.802300 1493648 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:18:23.802362 1493648 out.go:374] Setting ErrFile to fd 2...
	I1119 02:18:23.802385 1493648 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:18:23.802690 1493648 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-1463525/.minikube/bin
	I1119 02:18:23.803803 1493648 config.go:182] Loaded profile config "functional-132054": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:18:23.803973 1493648 config.go:182] Loaded profile config "functional-132054": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:18:23.804508 1493648 cli_runner.go:164] Run: docker container inspect functional-132054 --format={{.State.Status}}
	I1119 02:18:23.825249 1493648 ssh_runner.go:195] Run: systemctl --version
	I1119 02:18:23.825318 1493648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-132054
	I1119 02:18:23.857679 1493648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34624 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/functional-132054/id_rsa Username:docker}
	I1119 02:18:23.981756 1493648 cache_images.go:291] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar
	W1119 02:18:23.981815 1493648 cache_images.go:255] Failed to load cached images for "functional-132054": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar: no such file or directory
	I1119 02:18:23.981837 1493648 cache_images.go:267] failed pushing to: functional-132054

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.30s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-132054
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-132054 image save --daemon kicbase/echo-server:functional-132054 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-132054
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-132054: exit status 1 (18.56949ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-132054

                                                
                                                
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-132054

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.41s)
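
Note: the assertion expects "localhost/kicbase/echo-server:functional-132054" in the host Docker daemon after "image save --daemon"; the preceding "docker rmi" plus a save that apparently exported nothing leaves no matching tag behind. A one-line check of what the daemon actually holds (diagnostic sketch):

	docker images --format '{{.Repository}}:{{.Tag}}' | grep echo-server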

                                                
                                    
x
+
TestJSONOutput/pause/Command (2.22s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-755140 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p json-output-755140 --output=json --user=testUser: exit status 80 (2.221029281s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"fd7c0089-41fe-4e27-9369-6d6456ec8cf0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-755140 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"a305e571-9260-4cbb-8812-1c06b1a4a00b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-19T02:31:01Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"85c02824-893a-4716-8d35-11a326bb3dbe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 pause -p json-output-755140 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.22s)
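
Note: this failure, the unpause failure below, and TestPause/serial/Pause all trace back to "sudo runc list -f json" failing with "open /run/runc: no such file or directory", i.e. the runc state root minikube queries is not the one populated on a CRI-O node. A diagnostic sketch (the candidate directories are guesses, not a documented CRI-O default):

	minikube -p json-output-755140 ssh -- 'ls -d /run/runc /run/crio /run/containers 2>/dev/null; sudo crictl ps'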

                                                
                                    
x
+
TestJSONOutput/unpause/Command (1.76s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-755140 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 unpause -p json-output-755140 --output=json --user=testUser: exit status 80 (1.759433374s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"ceed74c2-ed03-4d5c-9239-c9838988d876","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-755140 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"aef1cda5-2650-4277-abd2-e9335674228d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-19T02:31:02Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"11027852-2b37-4474-bcc9-412e3c12c732","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 unpause -p json-output-755140 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.76s)

                                                
                                    
x
+
TestPause/serial/Pause (6.99s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-210634 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p pause-210634 --alsologtostderr -v=5: exit status 80 (2.101958722s)

                                                
                                                
-- stdout --
	* Pausing node pause-210634 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1119 02:54:33.898588 1627411 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:54:33.900105 1627411 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:54:33.900161 1627411 out.go:374] Setting ErrFile to fd 2...
	I1119 02:54:33.900184 1627411 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:54:33.900499 1627411 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-1463525/.minikube/bin
	I1119 02:54:33.900817 1627411 out.go:368] Setting JSON to false
	I1119 02:54:33.900870 1627411 mustload.go:66] Loading cluster: pause-210634
	I1119 02:54:33.901349 1627411 config.go:182] Loaded profile config "pause-210634": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:54:33.901906 1627411 cli_runner.go:164] Run: docker container inspect pause-210634 --format={{.State.Status}}
	I1119 02:54:33.923042 1627411 host.go:66] Checking if "pause-210634" exists ...
	I1119 02:54:33.923370 1627411 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:54:34.007097 1627411 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-19 02:54:33.99427174 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 02:54:34.008079 1627411 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-210634 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) want
virtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1119 02:54:34.012366 1627411 out.go:179] * Pausing node pause-210634 ... 
	I1119 02:54:34.015213 1627411 host.go:66] Checking if "pause-210634" exists ...
	I1119 02:54:34.015565 1627411 ssh_runner.go:195] Run: systemctl --version
	I1119 02:54:34.015624 1627411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-210634
	I1119 02:54:34.041101 1627411 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34870 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/pause-210634/id_rsa Username:docker}
	I1119 02:54:34.144990 1627411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 02:54:34.166681 1627411 pause.go:52] kubelet running: true
	I1119 02:54:34.166749 1627411 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1119 02:54:34.441352 1627411 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1119 02:54:34.441441 1627411 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1119 02:54:34.550264 1627411 cri.go:89] found id: "eae9c2747a6c090141bed859a946caeda9db2b858e668856f491efbcc50cc1f0"
	I1119 02:54:34.550289 1627411 cri.go:89] found id: "5b278aa8fa67a48484345b54805bb5c29e9162a5f54af5ee074fef8c5766b072"
	I1119 02:54:34.550296 1627411 cri.go:89] found id: "b48490446c0478d4b524dee6f413a7df871f5a694e492a481e6b826633cf96b5"
	I1119 02:54:34.550300 1627411 cri.go:89] found id: "003c29925c9f74027c0217cc5ade71e94414e340d291dd591f874bc578fbea1e"
	I1119 02:54:34.550314 1627411 cri.go:89] found id: "e150eb077157007e590ae1733965580b7175324548f25a32203bab129b2bd815"
	I1119 02:54:34.550322 1627411 cri.go:89] found id: "92c3ad91064be1f0a314b7990a3febf510451be94e08960b34dcdff3fadc057b"
	I1119 02:54:34.550326 1627411 cri.go:89] found id: "eb8cf828ba50c64f1cda8c35f26f210691c1cb238dd26ab0d751895dff0facef"
	I1119 02:54:34.550329 1627411 cri.go:89] found id: "1ca5597ffbe5c95f3994042247107836a869c47234e06acdc0ead2bc3dded4ac"
	I1119 02:54:34.550332 1627411 cri.go:89] found id: "b89fe3c5e3979493011dde519b29f7ae915a6bd84a62073ff542628e53e0b863"
	I1119 02:54:34.550341 1627411 cri.go:89] found id: "1cb2a0f2c8744125697bf96d494f11f98ffa7f0812d3661e3ae50c530dcb2241"
	I1119 02:54:34.550347 1627411 cri.go:89] found id: "6e6bccbb7a956f12be30b89d73d28b8866fad0012f5638de488e292311f075e7"
	I1119 02:54:34.550350 1627411 cri.go:89] found id: "e3f1e86ddd1d329884483c2ad1df7a1973076d6ec408d93655d53c17a56315e3"
	I1119 02:54:34.550353 1627411 cri.go:89] found id: "da21118c4e7ffd2ca35cc7a4a6cbace43bc77174d343ccb4fbf9ea2f65d04d5e"
	I1119 02:54:34.550356 1627411 cri.go:89] found id: "99c5fdf54f0795b8af2a7e440cbeb21a2991c76dbb380f799c0f0a3f93211efa"
	I1119 02:54:34.550359 1627411 cri.go:89] found id: ""
	I1119 02:54:34.550418 1627411 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 02:54:34.566066 1627411 retry.go:31] will retry after 373.837874ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:54:34Z" level=error msg="open /run/runc: no such file or directory"
	I1119 02:54:34.940693 1627411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 02:54:34.953916 1627411 pause.go:52] kubelet running: false
	I1119 02:54:34.954014 1627411 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1119 02:54:35.107758 1627411 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1119 02:54:35.107870 1627411 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1119 02:54:35.182607 1627411 cri.go:89] found id: "eae9c2747a6c090141bed859a946caeda9db2b858e668856f491efbcc50cc1f0"
	I1119 02:54:35.182637 1627411 cri.go:89] found id: "5b278aa8fa67a48484345b54805bb5c29e9162a5f54af5ee074fef8c5766b072"
	I1119 02:54:35.182643 1627411 cri.go:89] found id: "b48490446c0478d4b524dee6f413a7df871f5a694e492a481e6b826633cf96b5"
	I1119 02:54:35.182648 1627411 cri.go:89] found id: "003c29925c9f74027c0217cc5ade71e94414e340d291dd591f874bc578fbea1e"
	I1119 02:54:35.182651 1627411 cri.go:89] found id: "e150eb077157007e590ae1733965580b7175324548f25a32203bab129b2bd815"
	I1119 02:54:35.182655 1627411 cri.go:89] found id: "92c3ad91064be1f0a314b7990a3febf510451be94e08960b34dcdff3fadc057b"
	I1119 02:54:35.182658 1627411 cri.go:89] found id: "eb8cf828ba50c64f1cda8c35f26f210691c1cb238dd26ab0d751895dff0facef"
	I1119 02:54:35.182662 1627411 cri.go:89] found id: "1ca5597ffbe5c95f3994042247107836a869c47234e06acdc0ead2bc3dded4ac"
	I1119 02:54:35.182665 1627411 cri.go:89] found id: "b89fe3c5e3979493011dde519b29f7ae915a6bd84a62073ff542628e53e0b863"
	I1119 02:54:35.182672 1627411 cri.go:89] found id: "1cb2a0f2c8744125697bf96d494f11f98ffa7f0812d3661e3ae50c530dcb2241"
	I1119 02:54:35.182676 1627411 cri.go:89] found id: "6e6bccbb7a956f12be30b89d73d28b8866fad0012f5638de488e292311f075e7"
	I1119 02:54:35.182679 1627411 cri.go:89] found id: "e3f1e86ddd1d329884483c2ad1df7a1973076d6ec408d93655d53c17a56315e3"
	I1119 02:54:35.182683 1627411 cri.go:89] found id: "da21118c4e7ffd2ca35cc7a4a6cbace43bc77174d343ccb4fbf9ea2f65d04d5e"
	I1119 02:54:35.182688 1627411 cri.go:89] found id: "99c5fdf54f0795b8af2a7e440cbeb21a2991c76dbb380f799c0f0a3f93211efa"
	I1119 02:54:35.182691 1627411 cri.go:89] found id: ""
	I1119 02:54:35.182745 1627411 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 02:54:35.195076 1627411 retry.go:31] will retry after 471.341799ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:54:35Z" level=error msg="open /run/runc: no such file or directory"
	I1119 02:54:35.666768 1627411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 02:54:35.679488 1627411 pause.go:52] kubelet running: false
	I1119 02:54:35.679562 1627411 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1119 02:54:35.826548 1627411 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1119 02:54:35.826622 1627411 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1119 02:54:35.893265 1627411 cri.go:89] found id: "eae9c2747a6c090141bed859a946caeda9db2b858e668856f491efbcc50cc1f0"
	I1119 02:54:35.893287 1627411 cri.go:89] found id: "5b278aa8fa67a48484345b54805bb5c29e9162a5f54af5ee074fef8c5766b072"
	I1119 02:54:35.893292 1627411 cri.go:89] found id: "b48490446c0478d4b524dee6f413a7df871f5a694e492a481e6b826633cf96b5"
	I1119 02:54:35.893295 1627411 cri.go:89] found id: "003c29925c9f74027c0217cc5ade71e94414e340d291dd591f874bc578fbea1e"
	I1119 02:54:35.893299 1627411 cri.go:89] found id: "e150eb077157007e590ae1733965580b7175324548f25a32203bab129b2bd815"
	I1119 02:54:35.893302 1627411 cri.go:89] found id: "92c3ad91064be1f0a314b7990a3febf510451be94e08960b34dcdff3fadc057b"
	I1119 02:54:35.893305 1627411 cri.go:89] found id: "eb8cf828ba50c64f1cda8c35f26f210691c1cb238dd26ab0d751895dff0facef"
	I1119 02:54:35.893309 1627411 cri.go:89] found id: "1ca5597ffbe5c95f3994042247107836a869c47234e06acdc0ead2bc3dded4ac"
	I1119 02:54:35.893312 1627411 cri.go:89] found id: "b89fe3c5e3979493011dde519b29f7ae915a6bd84a62073ff542628e53e0b863"
	I1119 02:54:35.893319 1627411 cri.go:89] found id: "1cb2a0f2c8744125697bf96d494f11f98ffa7f0812d3661e3ae50c530dcb2241"
	I1119 02:54:35.893322 1627411 cri.go:89] found id: "6e6bccbb7a956f12be30b89d73d28b8866fad0012f5638de488e292311f075e7"
	I1119 02:54:35.893326 1627411 cri.go:89] found id: "e3f1e86ddd1d329884483c2ad1df7a1973076d6ec408d93655d53c17a56315e3"
	I1119 02:54:35.893329 1627411 cri.go:89] found id: "da21118c4e7ffd2ca35cc7a4a6cbace43bc77174d343ccb4fbf9ea2f65d04d5e"
	I1119 02:54:35.893342 1627411 cri.go:89] found id: "99c5fdf54f0795b8af2a7e440cbeb21a2991c76dbb380f799c0f0a3f93211efa"
	I1119 02:54:35.893345 1627411 cri.go:89] found id: ""
	I1119 02:54:35.893398 1627411 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 02:54:35.907697 1627411 out.go:203] 
	W1119 02:54:35.910704 1627411 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:54:35Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:54:35Z" level=error msg="open /run/runc: no such file or directory"
	
	W1119 02:54:35.910774 1627411 out.go:285] * 
	* 
	W1119 02:54:35.920425 1627411 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 02:54:35.923404 1627411 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-arm64 pause -p pause-210634 --alsologtostderr -v=5" : exit status 80
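
Note: same "/run/runc" symptom as the TestJSONOutput pause/unpause failures above. Once the populated state directory is known (see the sketch there), runc can be pointed at it explicitly; RUNC_ROOT below is a placeholder, not a known value for this node:

	minikube -p pause-210634 ssh -- sudo runc --root "$RUNC_ROOT" list -f json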
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-210634
helpers_test.go:243: (dbg) docker inspect pause-210634:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "249e4b242f0b17479157650f712844fbdd0c7142b9018c81418642ebff1bdf0d",
	        "Created": "2025-11-19T02:52:50.098081333Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1621421,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T02:52:50.176159394Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:6dfeb5329cc83d126555f88f43b09eef7a09c7f546c9166b94d33747df91b6df",
	        "ResolvConfPath": "/var/lib/docker/containers/249e4b242f0b17479157650f712844fbdd0c7142b9018c81418642ebff1bdf0d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/249e4b242f0b17479157650f712844fbdd0c7142b9018c81418642ebff1bdf0d/hostname",
	        "HostsPath": "/var/lib/docker/containers/249e4b242f0b17479157650f712844fbdd0c7142b9018c81418642ebff1bdf0d/hosts",
	        "LogPath": "/var/lib/docker/containers/249e4b242f0b17479157650f712844fbdd0c7142b9018c81418642ebff1bdf0d/249e4b242f0b17479157650f712844fbdd0c7142b9018c81418642ebff1bdf0d-json.log",
	        "Name": "/pause-210634",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "pause-210634:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-210634",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "249e4b242f0b17479157650f712844fbdd0c7142b9018c81418642ebff1bdf0d",
	                "LowerDir": "/var/lib/docker/overlay2/513f633e907bad9a3090db93b009d2e7332eff159b0048afcb42b9e0d24e9037-init/diff:/var/lib/docker/overlay2/c48d08e2bd245db4e1c5c6447aff9f72126e9377265a1f1172daf5070a059e2a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/513f633e907bad9a3090db93b009d2e7332eff159b0048afcb42b9e0d24e9037/merged",
	                "UpperDir": "/var/lib/docker/overlay2/513f633e907bad9a3090db93b009d2e7332eff159b0048afcb42b9e0d24e9037/diff",
	                "WorkDir": "/var/lib/docker/overlay2/513f633e907bad9a3090db93b009d2e7332eff159b0048afcb42b9e0d24e9037/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-210634",
	                "Source": "/var/lib/docker/volumes/pause-210634/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-210634",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-210634",
	                "name.minikube.sigs.k8s.io": "pause-210634",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "14d149e9772ca55437a24df0fbe0158d795bf76803bd2dc0467f0edca1859d21",
	            "SandboxKey": "/var/run/docker/netns/14d149e9772c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34870"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34871"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34874"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34872"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34873"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-210634": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9a:b4:21:70:1d:bb",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c6813d0b65e05a2d350ffe0eb6da8306397813b9881464223343b3645698449c",
	                    "EndpointID": "210cfdde01487a2968b85ea84ed19a181cc71bbf8a0aa33d51e9adc8e2011934",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-210634",
	                        "249e4b242f0b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
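Note: the full docker inspect dump above can be narrowed to just the fields the harness relies on. A minimal sketch, assuming the pause-210634 container is still present on the host; both Go templates are the same ones minikube itself runs later in this log to discover the SSH host port and the container IP:

	# host port mapped to the container's SSH port (22/tcp)
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' pause-210634
	# container IPv4/IPv6 addresses on the per-profile "pause-210634" network
	docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}' pause-210634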
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-210634 -n pause-210634
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-210634 -n pause-210634: exit status 2 (358.154501ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
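Exit status 2 with a "Running" host is not itself the failure: minikube status returns a non-zero code whenever any component (kubelet, apiserver) is stopped or paused, which is why the harness notes "(may be ok)". A sketch for inspecting the remaining status fields by hand, assuming the profile still exists and this minikube build supports the json output mode:

	out/minikube-linux-arm64 status -p pause-210634 --output json
	out/minikube-linux-arm64 status -p pause-210634 --format '{{.Host}} {{.Kubelet}} {{.APIServer}}'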
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-210634 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-210634 logs -n 25: (1.488364837s)
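The 25-entry dump below is the standard minikube logs post-mortem. When reproducing it by hand, the same output can be written to a file for attachment rather than scrolled in the terminal; a sketch, assuming the --file flag available in recent minikube releases:

	out/minikube-linux-arm64 -p pause-210634 logs --file=pause-210634-postmortem.txt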
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-841094 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                    │ NoKubernetes-841094       │ jenkins │ v1.37.0 │ 19 Nov 25 02:48 UTC │ 19 Nov 25 02:49 UTC │
	│ start   │ -p missing-upgrade-794811 --memory=3072 --driver=docker  --container-runtime=crio                                                        │ missing-upgrade-794811    │ jenkins │ v1.32.0 │ 19 Nov 25 02:48 UTC │ 19 Nov 25 02:49 UTC │
	│ start   │ -p NoKubernetes-841094 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-841094       │ jenkins │ v1.37.0 │ 19 Nov 25 02:49 UTC │ 19 Nov 25 02:49 UTC │
	│ delete  │ -p NoKubernetes-841094                                                                                                                   │ NoKubernetes-841094       │ jenkins │ v1.37.0 │ 19 Nov 25 02:49 UTC │ 19 Nov 25 02:49 UTC │
	│ start   │ -p NoKubernetes-841094 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-841094       │ jenkins │ v1.37.0 │ 19 Nov 25 02:49 UTC │ 19 Nov 25 02:49 UTC │
	│ ssh     │ -p NoKubernetes-841094 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-841094       │ jenkins │ v1.37.0 │ 19 Nov 25 02:49 UTC │                     │
	│ stop    │ -p NoKubernetes-841094                                                                                                                   │ NoKubernetes-841094       │ jenkins │ v1.37.0 │ 19 Nov 25 02:49 UTC │ 19 Nov 25 02:49 UTC │
	│ start   │ -p NoKubernetes-841094 --driver=docker  --container-runtime=crio                                                                         │ NoKubernetes-841094       │ jenkins │ v1.37.0 │ 19 Nov 25 02:49 UTC │ 19 Nov 25 02:49 UTC │
	│ ssh     │ -p NoKubernetes-841094 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-841094       │ jenkins │ v1.37.0 │ 19 Nov 25 02:49 UTC │                     │
	│ delete  │ -p NoKubernetes-841094                                                                                                                   │ NoKubernetes-841094       │ jenkins │ v1.37.0 │ 19 Nov 25 02:49 UTC │ 19 Nov 25 02:49 UTC │
	│ start   │ -p kubernetes-upgrade-315505 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-315505 │ jenkins │ v1.37.0 │ 19 Nov 25 02:49 UTC │ 19 Nov 25 02:50 UTC │
	│ start   │ -p missing-upgrade-794811 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-794811    │ jenkins │ v1.37.0 │ 19 Nov 25 02:49 UTC │ 19 Nov 25 02:50 UTC │
	│ stop    │ -p kubernetes-upgrade-315505                                                                                                             │ kubernetes-upgrade-315505 │ jenkins │ v1.37.0 │ 19 Nov 25 02:50 UTC │ 19 Nov 25 02:50 UTC │
	│ start   │ -p kubernetes-upgrade-315505 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-315505 │ jenkins │ v1.37.0 │ 19 Nov 25 02:50 UTC │                     │
	│ delete  │ -p missing-upgrade-794811                                                                                                                │ missing-upgrade-794811    │ jenkins │ v1.37.0 │ 19 Nov 25 02:50 UTC │ 19 Nov 25 02:50 UTC │
	│ start   │ -p stopped-upgrade-245523 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-245523    │ jenkins │ v1.32.0 │ 19 Nov 25 02:50 UTC │ 19 Nov 25 02:51 UTC │
	│ stop    │ stopped-upgrade-245523 stop                                                                                                              │ stopped-upgrade-245523    │ jenkins │ v1.32.0 │ 19 Nov 25 02:51 UTC │ 19 Nov 25 02:51 UTC │
	│ start   │ -p stopped-upgrade-245523 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-245523    │ jenkins │ v1.37.0 │ 19 Nov 25 02:51 UTC │ 19 Nov 25 02:51 UTC │
	│ delete  │ -p stopped-upgrade-245523                                                                                                                │ stopped-upgrade-245523    │ jenkins │ v1.37.0 │ 19 Nov 25 02:51 UTC │ 19 Nov 25 02:51 UTC │
	│ start   │ -p running-upgrade-422316 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ running-upgrade-422316    │ jenkins │ v1.32.0 │ 19 Nov 25 02:51 UTC │ 19 Nov 25 02:52 UTC │
	│ start   │ -p running-upgrade-422316 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ running-upgrade-422316    │ jenkins │ v1.37.0 │ 19 Nov 25 02:52 UTC │ 19 Nov 25 02:52 UTC │
	│ delete  │ -p running-upgrade-422316                                                                                                                │ running-upgrade-422316    │ jenkins │ v1.37.0 │ 19 Nov 25 02:52 UTC │ 19 Nov 25 02:52 UTC │
	│ start   │ -p pause-210634 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-210634              │ jenkins │ v1.37.0 │ 19 Nov 25 02:52 UTC │ 19 Nov 25 02:54 UTC │
	│ start   │ -p pause-210634 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-210634              │ jenkins │ v1.37.0 │ 19 Nov 25 02:54 UTC │ 19 Nov 25 02:54 UTC │
	│ pause   │ -p pause-210634 --alsologtostderr -v=5                                                                                                   │ pause-210634              │ jenkins │ v1.37.0 │ 19 Nov 25 02:54 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 02:54:07
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 02:54:07.516228 1625403 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:54:07.516445 1625403 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:54:07.516477 1625403 out.go:374] Setting ErrFile to fd 2...
	I1119 02:54:07.516496 1625403 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:54:07.516770 1625403 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-1463525/.minikube/bin
	I1119 02:54:07.517141 1625403 out.go:368] Setting JSON to false
	I1119 02:54:07.518213 1625403 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":38175,"bootTime":1763482673,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1119 02:54:07.518309 1625403 start.go:143] virtualization:  
	I1119 02:54:07.523607 1625403 out.go:179] * [pause-210634] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1119 02:54:07.526863 1625403 notify.go:221] Checking for updates...
	I1119 02:54:07.533567 1625403 out.go:179]   - MINIKUBE_LOCATION=21924
	I1119 02:54:07.536694 1625403 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 02:54:07.539792 1625403 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21924-1463525/kubeconfig
	I1119 02:54:07.542874 1625403 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-1463525/.minikube
	I1119 02:54:07.545699 1625403 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1119 02:54:07.548517 1625403 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 02:54:07.551765 1625403 config.go:182] Loaded profile config "pause-210634": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:54:07.552322 1625403 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 02:54:07.597668 1625403 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1119 02:54:07.597778 1625403 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:54:07.687121 1625403 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-19 02:54:07.673467353 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 02:54:07.687218 1625403 docker.go:319] overlay module found
	I1119 02:54:07.690221 1625403 out.go:179] * Using the docker driver based on existing profile
	I1119 02:54:07.692969 1625403 start.go:309] selected driver: docker
	I1119 02:54:07.692983 1625403 start.go:930] validating driver "docker" against &{Name:pause-210634 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-210634 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:54:07.693093 1625403 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 02:54:07.693187 1625403 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:54:07.764935 1625403 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-19 02:54:07.755265794 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 02:54:07.765419 1625403 cni.go:84] Creating CNI manager for ""
	I1119 02:54:07.765483 1625403 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 02:54:07.765533 1625403 start.go:353] cluster config:
	{Name:pause-210634 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-210634 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:54:07.770684 1625403 out.go:179] * Starting "pause-210634" primary control-plane node in "pause-210634" cluster
	I1119 02:54:07.773367 1625403 cache.go:134] Beginning downloading kic base image for docker with crio
	I1119 02:54:07.776370 1625403 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1119 02:54:07.779137 1625403 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 02:54:07.779181 1625403 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1119 02:54:07.779190 1625403 cache.go:65] Caching tarball of preloaded images
	I1119 02:54:07.779280 1625403 preload.go:238] Found /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1119 02:54:07.779290 1625403 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 02:54:07.779439 1625403 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/pause-210634/config.json ...
	I1119 02:54:07.779645 1625403 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1119 02:54:07.814084 1625403 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1119 02:54:07.814103 1625403 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1119 02:54:07.814176 1625403 cache.go:243] Successfully downloaded all kic artifacts
	I1119 02:54:07.814234 1625403 start.go:360] acquireMachinesLock for pause-210634: {Name:mk19349f7139b87fee1a009db22474497ab35596 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 02:54:07.814335 1625403 start.go:364] duration metric: took 79.637µs to acquireMachinesLock for "pause-210634"
	I1119 02:54:07.814356 1625403 start.go:96] Skipping create...Using existing machine configuration
	I1119 02:54:07.814409 1625403 fix.go:54] fixHost starting: 
	I1119 02:54:07.814776 1625403 cli_runner.go:164] Run: docker container inspect pause-210634 --format={{.State.Status}}
	I1119 02:54:07.843750 1625403 fix.go:112] recreateIfNeeded on pause-210634: state=Running err=<nil>
	W1119 02:54:07.843776 1625403 fix.go:138] unexpected machine state, will restart: <nil>
	I1119 02:54:03.874342 1608779 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:33740->192.168.76.2:8443: read: connection reset by peer
	I1119 02:54:03.874407 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 02:54:03.874467 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 02:54:03.903151 1608779 cri.go:89] found id: "b7cd8cb563c444fc3e8b2f5a65af9f99e0a16e61affc3a354a308582b4899e79"
	I1119 02:54:03.903182 1608779 cri.go:89] found id: "689c473fe75d4f2b8d0567a81b8b468fc12add10ee60464c16d0c7fc0b6b067a"
	I1119 02:54:03.903187 1608779 cri.go:89] found id: ""
	I1119 02:54:03.903195 1608779 logs.go:282] 2 containers: [b7cd8cb563c444fc3e8b2f5a65af9f99e0a16e61affc3a354a308582b4899e79 689c473fe75d4f2b8d0567a81b8b468fc12add10ee60464c16d0c7fc0b6b067a]
	I1119 02:54:03.903252 1608779 ssh_runner.go:195] Run: which crictl
	I1119 02:54:03.907032 1608779 ssh_runner.go:195] Run: which crictl
	I1119 02:54:03.910460 1608779 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 02:54:03.910531 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 02:54:03.938541 1608779 cri.go:89] found id: ""
	I1119 02:54:03.938565 1608779 logs.go:282] 0 containers: []
	W1119 02:54:03.938573 1608779 logs.go:284] No container was found matching "etcd"
	I1119 02:54:03.938580 1608779 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 02:54:03.938637 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 02:54:03.967843 1608779 cri.go:89] found id: ""
	I1119 02:54:03.967868 1608779 logs.go:282] 0 containers: []
	W1119 02:54:03.967877 1608779 logs.go:284] No container was found matching "coredns"
	I1119 02:54:03.967884 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 02:54:03.967938 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 02:54:03.995383 1608779 cri.go:89] found id: "66d47217a8e90ebe2f0ae3a5a07bfee4ccb7f856fca793dec477ab23b15cc03d"
	I1119 02:54:03.995404 1608779 cri.go:89] found id: ""
	I1119 02:54:03.995412 1608779 logs.go:282] 1 containers: [66d47217a8e90ebe2f0ae3a5a07bfee4ccb7f856fca793dec477ab23b15cc03d]
	I1119 02:54:03.995465 1608779 ssh_runner.go:195] Run: which crictl
	I1119 02:54:03.999092 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 02:54:03.999168 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 02:54:04.028162 1608779 cri.go:89] found id: ""
	I1119 02:54:04.028187 1608779 logs.go:282] 0 containers: []
	W1119 02:54:04.028196 1608779 logs.go:284] No container was found matching "kube-proxy"
	I1119 02:54:04.028202 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 02:54:04.028261 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 02:54:04.055568 1608779 cri.go:89] found id: "b82017d2142c6347c55605889407441b8e982da03ec51f193858685925fb47c5"
	I1119 02:54:04.055591 1608779 cri.go:89] found id: "7d775c67113fa1266ea751d89fce6dee2939bcc84dea7253349321767730c733"
	I1119 02:54:04.055596 1608779 cri.go:89] found id: ""
	I1119 02:54:04.055604 1608779 logs.go:282] 2 containers: [b82017d2142c6347c55605889407441b8e982da03ec51f193858685925fb47c5 7d775c67113fa1266ea751d89fce6dee2939bcc84dea7253349321767730c733]
	I1119 02:54:04.055662 1608779 ssh_runner.go:195] Run: which crictl
	I1119 02:54:04.059561 1608779 ssh_runner.go:195] Run: which crictl
	I1119 02:54:04.063468 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 02:54:04.063543 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 02:54:04.105131 1608779 cri.go:89] found id: ""
	I1119 02:54:04.105157 1608779 logs.go:282] 0 containers: []
	W1119 02:54:04.105166 1608779 logs.go:284] No container was found matching "kindnet"
	I1119 02:54:04.105172 1608779 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 02:54:04.105237 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 02:54:04.155523 1608779 cri.go:89] found id: ""
	I1119 02:54:04.155558 1608779 logs.go:282] 0 containers: []
	W1119 02:54:04.155567 1608779 logs.go:284] No container was found matching "storage-provisioner"
	I1119 02:54:04.155580 1608779 logs.go:123] Gathering logs for dmesg ...
	I1119 02:54:04.155592 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 02:54:04.173626 1608779 logs.go:123] Gathering logs for kube-apiserver [b7cd8cb563c444fc3e8b2f5a65af9f99e0a16e61affc3a354a308582b4899e79] ...
	I1119 02:54:04.173655 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b7cd8cb563c444fc3e8b2f5a65af9f99e0a16e61affc3a354a308582b4899e79"
	I1119 02:54:04.206493 1608779 logs.go:123] Gathering logs for kube-apiserver [689c473fe75d4f2b8d0567a81b8b468fc12add10ee60464c16d0c7fc0b6b067a] ...
	I1119 02:54:04.206525 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689c473fe75d4f2b8d0567a81b8b468fc12add10ee60464c16d0c7fc0b6b067a"
	I1119 02:54:04.243783 1608779 logs.go:123] Gathering logs for kube-controller-manager [b82017d2142c6347c55605889407441b8e982da03ec51f193858685925fb47c5] ...
	I1119 02:54:04.243860 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b82017d2142c6347c55605889407441b8e982da03ec51f193858685925fb47c5"
	I1119 02:54:04.284456 1608779 logs.go:123] Gathering logs for CRI-O ...
	I1119 02:54:04.284536 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 02:54:04.355365 1608779 logs.go:123] Gathering logs for kubelet ...
	I1119 02:54:04.355443 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 02:54:04.507494 1608779 logs.go:123] Gathering logs for describe nodes ...
	I1119 02:54:04.507571 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 02:54:04.596145 1608779 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 02:54:04.596162 1608779 logs.go:123] Gathering logs for kube-scheduler [66d47217a8e90ebe2f0ae3a5a07bfee4ccb7f856fca793dec477ab23b15cc03d] ...
	I1119 02:54:04.596175 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 66d47217a8e90ebe2f0ae3a5a07bfee4ccb7f856fca793dec477ab23b15cc03d"
	I1119 02:54:04.660531 1608779 logs.go:123] Gathering logs for kube-controller-manager [7d775c67113fa1266ea751d89fce6dee2939bcc84dea7253349321767730c733] ...
	I1119 02:54:04.660565 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7d775c67113fa1266ea751d89fce6dee2939bcc84dea7253349321767730c733"
	I1119 02:54:04.687064 1608779 logs.go:123] Gathering logs for container status ...
	I1119 02:54:04.687090 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 02:54:07.218121 1608779 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 02:54:07.218602 1608779 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1119 02:54:07.218651 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 02:54:07.218714 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 02:54:07.244522 1608779 cri.go:89] found id: "b7cd8cb563c444fc3e8b2f5a65af9f99e0a16e61affc3a354a308582b4899e79"
	I1119 02:54:07.244548 1608779 cri.go:89] found id: ""
	I1119 02:54:07.244556 1608779 logs.go:282] 1 containers: [b7cd8cb563c444fc3e8b2f5a65af9f99e0a16e61affc3a354a308582b4899e79]
	I1119 02:54:07.244610 1608779 ssh_runner.go:195] Run: which crictl
	I1119 02:54:07.248189 1608779 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 02:54:07.248256 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 02:54:07.273778 1608779 cri.go:89] found id: ""
	I1119 02:54:07.273801 1608779 logs.go:282] 0 containers: []
	W1119 02:54:07.273810 1608779 logs.go:284] No container was found matching "etcd"
	I1119 02:54:07.273817 1608779 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 02:54:07.273875 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 02:54:07.300796 1608779 cri.go:89] found id: ""
	I1119 02:54:07.300820 1608779 logs.go:282] 0 containers: []
	W1119 02:54:07.300829 1608779 logs.go:284] No container was found matching "coredns"
	I1119 02:54:07.300837 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 02:54:07.300895 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 02:54:07.326681 1608779 cri.go:89] found id: "66d47217a8e90ebe2f0ae3a5a07bfee4ccb7f856fca793dec477ab23b15cc03d"
	I1119 02:54:07.326703 1608779 cri.go:89] found id: ""
	I1119 02:54:07.326710 1608779 logs.go:282] 1 containers: [66d47217a8e90ebe2f0ae3a5a07bfee4ccb7f856fca793dec477ab23b15cc03d]
	I1119 02:54:07.326764 1608779 ssh_runner.go:195] Run: which crictl
	I1119 02:54:07.330577 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 02:54:07.330662 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 02:54:07.367985 1608779 cri.go:89] found id: ""
	I1119 02:54:07.368019 1608779 logs.go:282] 0 containers: []
	W1119 02:54:07.368029 1608779 logs.go:284] No container was found matching "kube-proxy"
	I1119 02:54:07.368041 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 02:54:07.368099 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 02:54:07.422627 1608779 cri.go:89] found id: "b82017d2142c6347c55605889407441b8e982da03ec51f193858685925fb47c5"
	I1119 02:54:07.422654 1608779 cri.go:89] found id: "7d775c67113fa1266ea751d89fce6dee2939bcc84dea7253349321767730c733"
	I1119 02:54:07.422659 1608779 cri.go:89] found id: ""
	I1119 02:54:07.422667 1608779 logs.go:282] 2 containers: [b82017d2142c6347c55605889407441b8e982da03ec51f193858685925fb47c5 7d775c67113fa1266ea751d89fce6dee2939bcc84dea7253349321767730c733]
	I1119 02:54:07.422722 1608779 ssh_runner.go:195] Run: which crictl
	I1119 02:54:07.426865 1608779 ssh_runner.go:195] Run: which crictl
	I1119 02:54:07.439988 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 02:54:07.440089 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 02:54:07.479210 1608779 cri.go:89] found id: ""
	I1119 02:54:07.479229 1608779 logs.go:282] 0 containers: []
	W1119 02:54:07.479237 1608779 logs.go:284] No container was found matching "kindnet"
	I1119 02:54:07.479244 1608779 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 02:54:07.479307 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 02:54:07.520306 1608779 cri.go:89] found id: ""
	I1119 02:54:07.520324 1608779 logs.go:282] 0 containers: []
	W1119 02:54:07.520331 1608779 logs.go:284] No container was found matching "storage-provisioner"
	I1119 02:54:07.520345 1608779 logs.go:123] Gathering logs for dmesg ...
	I1119 02:54:07.520356 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 02:54:07.539571 1608779 logs.go:123] Gathering logs for kube-apiserver [b7cd8cb563c444fc3e8b2f5a65af9f99e0a16e61affc3a354a308582b4899e79] ...
	I1119 02:54:07.539596 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b7cd8cb563c444fc3e8b2f5a65af9f99e0a16e61affc3a354a308582b4899e79"
	I1119 02:54:07.585019 1608779 logs.go:123] Gathering logs for kube-scheduler [66d47217a8e90ebe2f0ae3a5a07bfee4ccb7f856fca793dec477ab23b15cc03d] ...
	I1119 02:54:07.585297 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 66d47217a8e90ebe2f0ae3a5a07bfee4ccb7f856fca793dec477ab23b15cc03d"
	I1119 02:54:07.697223 1608779 logs.go:123] Gathering logs for CRI-O ...
	I1119 02:54:07.697253 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 02:54:07.779709 1608779 logs.go:123] Gathering logs for container status ...
	I1119 02:54:07.779734 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 02:54:07.833820 1608779 logs.go:123] Gathering logs for kubelet ...
	I1119 02:54:07.833849 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 02:54:07.975741 1608779 logs.go:123] Gathering logs for describe nodes ...
	I1119 02:54:07.975831 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 02:54:08.069802 1608779 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 02:54:08.069819 1608779 logs.go:123] Gathering logs for kube-controller-manager [b82017d2142c6347c55605889407441b8e982da03ec51f193858685925fb47c5] ...
	I1119 02:54:08.069837 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b82017d2142c6347c55605889407441b8e982da03ec51f193858685925fb47c5"
	I1119 02:54:08.105675 1608779 logs.go:123] Gathering logs for kube-controller-manager [7d775c67113fa1266ea751d89fce6dee2939bcc84dea7253349321767730c733] ...
	I1119 02:54:08.105701 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7d775c67113fa1266ea751d89fce6dee2939bcc84dea7253349321767730c733"
	I1119 02:54:07.847082 1625403 out.go:252] * Updating the running docker "pause-210634" container ...
	I1119 02:54:07.847117 1625403 machine.go:94] provisionDockerMachine start ...
	I1119 02:54:07.847212 1625403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-210634
	I1119 02:54:07.879400 1625403 main.go:143] libmachine: Using SSH client type: native
	I1119 02:54:07.879736 1625403 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34870 <nil> <nil>}
	I1119 02:54:07.879747 1625403 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 02:54:08.048127 1625403 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-210634
	
	I1119 02:54:08.048153 1625403 ubuntu.go:182] provisioning hostname "pause-210634"
	I1119 02:54:08.048217 1625403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-210634
	I1119 02:54:08.066421 1625403 main.go:143] libmachine: Using SSH client type: native
	I1119 02:54:08.066725 1625403 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34870 <nil> <nil>}
	I1119 02:54:08.066739 1625403 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-210634 && echo "pause-210634" | sudo tee /etc/hostname
	I1119 02:54:08.237137 1625403 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-210634
	
	I1119 02:54:08.237213 1625403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-210634
	I1119 02:54:08.255593 1625403 main.go:143] libmachine: Using SSH client type: native
	I1119 02:54:08.255915 1625403 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34870 <nil> <nil>}
	I1119 02:54:08.255938 1625403 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-210634' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-210634/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-210634' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 02:54:08.398897 1625403 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 02:54:08.398921 1625403 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21924-1463525/.minikube CaCertPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21924-1463525/.minikube}
	I1119 02:54:08.398950 1625403 ubuntu.go:190] setting up certificates
	I1119 02:54:08.398968 1625403 provision.go:84] configureAuth start
	I1119 02:54:08.399031 1625403 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-210634
	I1119 02:54:08.417175 1625403 provision.go:143] copyHostCerts
	I1119 02:54:08.417239 1625403 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.pem, removing ...
	I1119 02:54:08.417256 1625403 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.pem
	I1119 02:54:08.417330 1625403 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.pem (1078 bytes)
	I1119 02:54:08.417422 1625403 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-1463525/.minikube/cert.pem, removing ...
	I1119 02:54:08.417428 1625403 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-1463525/.minikube/cert.pem
	I1119 02:54:08.417453 1625403 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21924-1463525/.minikube/cert.pem (1123 bytes)
	I1119 02:54:08.417503 1625403 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-1463525/.minikube/key.pem, removing ...
	I1119 02:54:08.417607 1625403 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-1463525/.minikube/key.pem
	I1119 02:54:08.417647 1625403 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21924-1463525/.minikube/key.pem (1675 bytes)
	I1119 02:54:08.417725 1625403 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca-key.pem org=jenkins.pause-210634 san=[127.0.0.1 192.168.85.2 localhost minikube pause-210634]
	I1119 02:54:08.933944 1625403 provision.go:177] copyRemoteCerts
	I1119 02:54:08.934034 1625403 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 02:54:08.934091 1625403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-210634
	I1119 02:54:08.951230 1625403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34870 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/pause-210634/id_rsa Username:docker}
	I1119 02:54:09.053283 1625403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1119 02:54:09.071812 1625403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1119 02:54:09.090300 1625403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1119 02:54:09.110589 1625403 provision.go:87] duration metric: took 711.607092ms to configureAuth
	I1119 02:54:09.110614 1625403 ubuntu.go:206] setting minikube options for container-runtime
	I1119 02:54:09.110837 1625403 config.go:182] Loaded profile config "pause-210634": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:54:09.110946 1625403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-210634
	I1119 02:54:09.131501 1625403 main.go:143] libmachine: Using SSH client type: native
	I1119 02:54:09.131823 1625403 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34870 <nil> <nil>}
	I1119 02:54:09.131838 1625403 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 02:54:10.654380 1608779 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 02:54:10.654834 1608779 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1119 02:54:10.654886 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 02:54:10.654945 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 02:54:10.681415 1608779 cri.go:89] found id: "b7cd8cb563c444fc3e8b2f5a65af9f99e0a16e61affc3a354a308582b4899e79"
	I1119 02:54:10.681434 1608779 cri.go:89] found id: ""
	I1119 02:54:10.681441 1608779 logs.go:282] 1 containers: [b7cd8cb563c444fc3e8b2f5a65af9f99e0a16e61affc3a354a308582b4899e79]
	I1119 02:54:10.681500 1608779 ssh_runner.go:195] Run: which crictl
	I1119 02:54:10.685275 1608779 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 02:54:10.685348 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 02:54:10.711752 1608779 cri.go:89] found id: ""
	I1119 02:54:10.711775 1608779 logs.go:282] 0 containers: []
	W1119 02:54:10.711784 1608779 logs.go:284] No container was found matching "etcd"
	I1119 02:54:10.711790 1608779 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 02:54:10.711847 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 02:54:10.737074 1608779 cri.go:89] found id: ""
	I1119 02:54:10.737096 1608779 logs.go:282] 0 containers: []
	W1119 02:54:10.737105 1608779 logs.go:284] No container was found matching "coredns"
	I1119 02:54:10.737111 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 02:54:10.737167 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 02:54:10.765047 1608779 cri.go:89] found id: "66d47217a8e90ebe2f0ae3a5a07bfee4ccb7f856fca793dec477ab23b15cc03d"
	I1119 02:54:10.765071 1608779 cri.go:89] found id: ""
	I1119 02:54:10.765078 1608779 logs.go:282] 1 containers: [66d47217a8e90ebe2f0ae3a5a07bfee4ccb7f856fca793dec477ab23b15cc03d]
	I1119 02:54:10.765146 1608779 ssh_runner.go:195] Run: which crictl
	I1119 02:54:10.769104 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 02:54:10.769202 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 02:54:10.796001 1608779 cri.go:89] found id: ""
	I1119 02:54:10.796028 1608779 logs.go:282] 0 containers: []
	W1119 02:54:10.796038 1608779 logs.go:284] No container was found matching "kube-proxy"
	I1119 02:54:10.796046 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 02:54:10.796108 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 02:54:10.823066 1608779 cri.go:89] found id: "b82017d2142c6347c55605889407441b8e982da03ec51f193858685925fb47c5"
	I1119 02:54:10.823139 1608779 cri.go:89] found id: "7d775c67113fa1266ea751d89fce6dee2939bcc84dea7253349321767730c733"
	I1119 02:54:10.823165 1608779 cri.go:89] found id: ""
	I1119 02:54:10.823186 1608779 logs.go:282] 2 containers: [b82017d2142c6347c55605889407441b8e982da03ec51f193858685925fb47c5 7d775c67113fa1266ea751d89fce6dee2939bcc84dea7253349321767730c733]
	I1119 02:54:10.823258 1608779 ssh_runner.go:195] Run: which crictl
	I1119 02:54:10.826876 1608779 ssh_runner.go:195] Run: which crictl
	I1119 02:54:10.830543 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 02:54:10.830613 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 02:54:10.860044 1608779 cri.go:89] found id: ""
	I1119 02:54:10.860118 1608779 logs.go:282] 0 containers: []
	W1119 02:54:10.860140 1608779 logs.go:284] No container was found matching "kindnet"
	I1119 02:54:10.860158 1608779 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 02:54:10.860245 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 02:54:10.891801 1608779 cri.go:89] found id: ""
	I1119 02:54:10.891869 1608779 logs.go:282] 0 containers: []
	W1119 02:54:10.891886 1608779 logs.go:284] No container was found matching "storage-provisioner"
	I1119 02:54:10.891902 1608779 logs.go:123] Gathering logs for kube-scheduler [66d47217a8e90ebe2f0ae3a5a07bfee4ccb7f856fca793dec477ab23b15cc03d] ...
	I1119 02:54:10.891917 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 66d47217a8e90ebe2f0ae3a5a07bfee4ccb7f856fca793dec477ab23b15cc03d"
	I1119 02:54:10.954950 1608779 logs.go:123] Gathering logs for kube-controller-manager [b82017d2142c6347c55605889407441b8e982da03ec51f193858685925fb47c5] ...
	I1119 02:54:10.954989 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b82017d2142c6347c55605889407441b8e982da03ec51f193858685925fb47c5"
	I1119 02:54:10.981705 1608779 logs.go:123] Gathering logs for CRI-O ...
	I1119 02:54:10.981743 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 02:54:11.041195 1608779 logs.go:123] Gathering logs for dmesg ...
	I1119 02:54:11.041229 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 02:54:11.057694 1608779 logs.go:123] Gathering logs for kube-apiserver [b7cd8cb563c444fc3e8b2f5a65af9f99e0a16e61affc3a354a308582b4899e79] ...
	I1119 02:54:11.057723 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b7cd8cb563c444fc3e8b2f5a65af9f99e0a16e61affc3a354a308582b4899e79"
	I1119 02:54:11.092547 1608779 logs.go:123] Gathering logs for kube-controller-manager [7d775c67113fa1266ea751d89fce6dee2939bcc84dea7253349321767730c733] ...
	I1119 02:54:11.092581 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7d775c67113fa1266ea751d89fce6dee2939bcc84dea7253349321767730c733"
	I1119 02:54:11.120330 1608779 logs.go:123] Gathering logs for container status ...
	I1119 02:54:11.120360 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 02:54:11.154077 1608779 logs.go:123] Gathering logs for kubelet ...
	I1119 02:54:11.154104 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 02:54:11.273472 1608779 logs.go:123] Gathering logs for describe nodes ...
	I1119 02:54:11.273518 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 02:54:11.342011 1608779 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
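	Note: the repeated "connection refused" above only means nothing is answering on localhost:8443 while the control plane restarts, not that kubectl is misconfigured. A minimal manual triage from the node (assuming "minikube ssh -p pause-210634" access; the container ID below is a placeholder for one printed elsewhere in this log) would be:
	    sudo crictl ps -a --name kube-apiserver               # is an apiserver container present, and in what state?
	    sudo crictl logs --tail 50 <apiserver-container-id>   # placeholder ID; shows why it exited, if it did
	    curl -ksS https://localhost:8443/healthz; echo        # confirm whether anything answers on 8443 at all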
	I1119 02:54:14.558622 1625403 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 02:54:14.558643 1625403 machine.go:97] duration metric: took 6.711518159s to provisionDockerMachine
	I1119 02:54:14.558654 1625403 start.go:293] postStartSetup for "pause-210634" (driver="docker")
	I1119 02:54:14.558664 1625403 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 02:54:14.558742 1625403 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 02:54:14.558781 1625403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-210634
	I1119 02:54:14.580903 1625403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34870 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/pause-210634/id_rsa Username:docker}
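	Note: the SSH endpoint used above (127.0.0.1:34870) is read out of the container's published port map; the equivalent one-off check on the host is a sketch assuming the docker CLI is on PATH:
	    docker port pause-210634 22    # prints the host mapping backing the ssh client above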
	I1119 02:54:14.681302 1625403 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 02:54:14.684593 1625403 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 02:54:14.684623 1625403 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 02:54:14.684634 1625403 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-1463525/.minikube/addons for local assets ...
	I1119 02:54:14.684687 1625403 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-1463525/.minikube/files for local assets ...
	I1119 02:54:14.684771 1625403 filesync.go:149] local asset: /home/jenkins/minikube-integration/21924-1463525/.minikube/files/etc/ssl/certs/14653772.pem -> 14653772.pem in /etc/ssl/certs
	I1119 02:54:14.684879 1625403 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 02:54:14.692783 1625403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/files/etc/ssl/certs/14653772.pem --> /etc/ssl/certs/14653772.pem (1708 bytes)
	I1119 02:54:14.709891 1625403 start.go:296] duration metric: took 151.221829ms for postStartSetup
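	Note: postStartSetup synced the extra CA bundle 14653772.pem into /etc/ssl/certs on the node; a quick, hedged way to confirm it arrived intact:
	    sudo ls -l /etc/ssl/certs/14653772.pem
	    sudo openssl x509 -in /etc/ssl/certs/14653772.pem -noout -subject -enddate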
	I1119 02:54:14.709968 1625403 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 02:54:14.710006 1625403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-210634
	I1119 02:54:14.727189 1625403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34870 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/pause-210634/id_rsa Username:docker}
	I1119 02:54:14.827425 1625403 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 02:54:14.832624 1625403 fix.go:56] duration metric: took 7.018209344s for fixHost
	I1119 02:54:14.832647 1625403 start.go:83] releasing machines lock for "pause-210634", held for 7.01830192s
	I1119 02:54:14.832720 1625403 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-210634
	I1119 02:54:14.851644 1625403 ssh_runner.go:195] Run: cat /version.json
	I1119 02:54:14.851693 1625403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-210634
	I1119 02:54:14.851995 1625403 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 02:54:14.852053 1625403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-210634
	I1119 02:54:14.872590 1625403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34870 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/pause-210634/id_rsa Username:docker}
	I1119 02:54:14.873266 1625403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34870 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/pause-210634/id_rsa Username:docker}
	I1119 02:54:14.973338 1625403 ssh_runner.go:195] Run: systemctl --version
	I1119 02:54:15.075979 1625403 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 02:54:15.117818 1625403 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 02:54:15.122669 1625403 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 02:54:15.122768 1625403 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 02:54:15.130801 1625403 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1119 02:54:15.130823 1625403 start.go:496] detecting cgroup driver to use...
	I1119 02:54:15.130855 1625403 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1119 02:54:15.130923 1625403 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 02:54:15.153127 1625403 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 02:54:15.166786 1625403 docker.go:218] disabling cri-docker service (if available) ...
	I1119 02:54:15.166861 1625403 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 02:54:15.183722 1625403 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 02:54:15.197753 1625403 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 02:54:15.336036 1625403 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 02:54:15.469376 1625403 docker.go:234] disabling docker service ...
	I1119 02:54:15.469572 1625403 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 02:54:15.485134 1625403 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 02:54:15.499879 1625403 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 02:54:15.629180 1625403 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 02:54:15.766684 1625403 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
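	Note: after the stop/disable/mask sequence above, docker.service and cri-docker.service should report "masked" and their sockets "disabled"; a minimal sketch for confirming that by hand on a systemd host:
	    systemctl is-enabled docker.service cri-docker.service docker.socket cri-docker.socket
	    systemctl is-active docker    # expected: inactive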
	I1119 02:54:15.780525 1625403 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 02:54:15.797682 1625403 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 02:54:15.797797 1625403 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:54:15.809086 1625403 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1119 02:54:15.809174 1625403 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:54:15.824976 1625403 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:54:15.835656 1625403 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:54:15.845640 1625403 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 02:54:15.854393 1625403 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:54:15.863523 1625403 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:54:15.871404 1625403 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:54:15.879822 1625403 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 02:54:15.887328 1625403 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 02:54:15.895017 1625403 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:54:16.026692 1625403 ssh_runner.go:195] Run: sudo systemctl restart crio
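	Note: the sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroupfs cgroup manager, conmon cgroup, the unprivileged-port sysctl). Once crio is back up, one hedged way to confirm they took effect is to grep the effective config that "crio config" (also run by minikube further down) prints:
	    sudo crio config 2>/dev/null | grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start'
	    # expected: pause_image = "registry.k8s.io/pause:3.10.1", cgroup_manager = "cgroupfs", conmon_cgroup = "pod"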
	I1119 02:54:16.254563 1625403 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 02:54:16.254680 1625403 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 02:54:16.259933 1625403 start.go:564] Will wait 60s for crictl version
	I1119 02:54:16.260006 1625403 ssh_runner.go:195] Run: which crictl
	I1119 02:54:16.263589 1625403 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 02:54:16.288074 1625403 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1119 02:54:16.288159 1625403 ssh_runner.go:195] Run: crio --version
	I1119 02:54:16.316446 1625403 ssh_runner.go:195] Run: crio --version
	I1119 02:54:16.352599 1625403 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1119 02:54:16.355631 1625403 cli_runner.go:164] Run: docker network inspect pause-210634 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 02:54:16.371651 1625403 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1119 02:54:16.375541 1625403 kubeadm.go:884] updating cluster {Name:pause-210634 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-210634 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 02:54:16.375701 1625403 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 02:54:16.375763 1625403 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 02:54:16.406517 1625403 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 02:54:16.406540 1625403 crio.go:433] Images already preloaded, skipping extraction
	I1119 02:54:16.406604 1625403 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 02:54:16.435469 1625403 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 02:54:16.435490 1625403 cache_images.go:86] Images are preloaded, skipping loading
	I1119 02:54:16.435498 1625403 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1119 02:54:16.435599 1625403 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-210634 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-210634 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 02:54:16.435684 1625403 ssh_runner.go:195] Run: crio config
	I1119 02:54:16.500449 1625403 cni.go:84] Creating CNI manager for ""
	I1119 02:54:16.500615 1625403 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 02:54:16.500643 1625403 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 02:54:16.500668 1625403 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-210634 NodeName:pause-210634 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 02:54:16.500794 1625403 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-210634"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1119 02:54:16.500868 1625403 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 02:54:16.509790 1625403 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 02:54:16.509898 1625403 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 02:54:16.517247 1625403 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1119 02:54:16.529988 1625403 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 02:54:16.542798 1625403 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
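	Note: the kubeadm config printed above is what was just written to /var/tmp/minikube/kubeadm.yaml.new. If the bundled kubeadm supports it (recent releases ship a "config validate" subcommand), the file can be linted before it is ever applied; a sketch:
	    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new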
	I1119 02:54:16.555361 1625403 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1119 02:54:16.559554 1625403 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:54:16.791730 1625403 ssh_runner.go:195] Run: sudo systemctl start kubelet
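	Note: the drop-in written above resets ExecStart and points kubelet at the v1.34.1 binary with --node-ip=192.168.85.2; a quick hedged check that systemd picked it up after the daemon-reload:
	    systemctl cat kubelet | grep -A2 '^ExecStart='
	    systemctl is-active kubelet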
	I1119 02:54:16.822182 1625403 certs.go:69] Setting up /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/pause-210634 for IP: 192.168.85.2
	I1119 02:54:16.822242 1625403 certs.go:195] generating shared ca certs ...
	I1119 02:54:16.822273 1625403 certs.go:227] acquiring lock for ca certs: {Name:mk25124d30540ffea11da5f0aa38fb6f55186602 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:54:16.822430 1625403 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.key
	I1119 02:54:16.822498 1625403 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/proxy-client-ca.key
	I1119 02:54:16.822520 1625403 certs.go:257] generating profile certs ...
	I1119 02:54:16.822633 1625403 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/pause-210634/client.key
	I1119 02:54:16.822722 1625403 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/pause-210634/apiserver.key.465ead23
	I1119 02:54:16.822799 1625403 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/pause-210634/proxy-client.key
	I1119 02:54:16.822964 1625403 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/1465377.pem (1338 bytes)
	W1119 02:54:16.823017 1625403 certs.go:480] ignoring /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/1465377_empty.pem, impossibly tiny 0 bytes
	I1119 02:54:16.823041 1625403 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca-key.pem (1679 bytes)
	I1119 02:54:16.823104 1625403 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem (1078 bytes)
	I1119 02:54:16.823153 1625403 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/cert.pem (1123 bytes)
	I1119 02:54:16.823210 1625403 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/key.pem (1675 bytes)
	I1119 02:54:16.823278 1625403 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/files/etc/ssl/certs/14653772.pem (1708 bytes)
	I1119 02:54:16.823931 1625403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 02:54:16.878147 1625403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1119 02:54:16.908149 1625403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 02:54:16.939152 1625403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1119 02:54:16.970998 1625403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/pause-210634/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1119 02:54:16.994829 1625403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/pause-210634/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1119 02:54:17.019893 1625403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/pause-210634/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 02:54:17.049762 1625403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/pause-210634/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1119 02:54:17.076158 1625403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/1465377.pem --> /usr/share/ca-certificates/1465377.pem (1338 bytes)
	I1119 02:54:17.104170 1625403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/files/etc/ssl/certs/14653772.pem --> /usr/share/ca-certificates/14653772.pem (1708 bytes)
	I1119 02:54:17.158858 1625403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 02:54:17.198809 1625403 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 02:54:17.218413 1625403 ssh_runner.go:195] Run: openssl version
	I1119 02:54:17.229171 1625403 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1465377.pem && ln -fs /usr/share/ca-certificates/1465377.pem /etc/ssl/certs/1465377.pem"
	I1119 02:54:17.250816 1625403 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1465377.pem
	I1119 02:54:17.254886 1625403 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 02:04 /usr/share/ca-certificates/1465377.pem
	I1119 02:54:17.254997 1625403 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1465377.pem
	I1119 02:54:17.326692 1625403 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1465377.pem /etc/ssl/certs/51391683.0"
	I1119 02:54:17.338583 1625403 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14653772.pem && ln -fs /usr/share/ca-certificates/14653772.pem /etc/ssl/certs/14653772.pem"
	I1119 02:54:17.356129 1625403 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14653772.pem
	I1119 02:54:17.370020 1625403 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 02:04 /usr/share/ca-certificates/14653772.pem
	I1119 02:54:17.370163 1625403 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14653772.pem
	I1119 02:54:17.450031 1625403 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14653772.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 02:54:17.461732 1625403 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 02:54:17.479373 1625403 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:54:17.483904 1625403 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 01:58 /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:54:17.484024 1625403 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:54:17.548056 1625403 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
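	Note: the hash-named links above (51391683.0, 3ec20f2e.0, b5213941.0) follow OpenSSL's subject-hash convention for CA directories; a sketch of how one of them is derived:
	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    echo "$h"                      # prints b5213941 for this CA, hence /etc/ssl/certs/b5213941.0
	    sudo ls -l "/etc/ssl/certs/$h.0"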
	I1119 02:54:17.561925 1625403 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 02:54:17.570363 1625403 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1119 02:54:17.678831 1625403 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1119 02:54:17.763317 1625403 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1119 02:54:17.851718 1625403 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1119 02:54:17.927358 1625403 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1119 02:54:18.005099 1625403 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
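	Note: "openssl x509 -checkend 86400" exits 0 only if the certificate is still valid 24 hours from now, so a non-zero exit on any of these files would push the flow toward regenerating certs; the same probe by hand on one of the certs copied earlier:
	    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
	      && echo "valid for >= 24h" || echo "expires within 24h"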
	I1119 02:54:18.085775 1625403 kubeadm.go:401] StartCluster: {Name:pause-210634 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-210634 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-
aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:54:18.085957 1625403 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 02:54:18.086060 1625403 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 02:54:18.174810 1625403 cri.go:89] found id: "eae9c2747a6c090141bed859a946caeda9db2b858e668856f491efbcc50cc1f0"
	I1119 02:54:18.174882 1625403 cri.go:89] found id: "5b278aa8fa67a48484345b54805bb5c29e9162a5f54af5ee074fef8c5766b072"
	I1119 02:54:18.174901 1625403 cri.go:89] found id: "b48490446c0478d4b524dee6f413a7df871f5a694e492a481e6b826633cf96b5"
	I1119 02:54:18.174937 1625403 cri.go:89] found id: "003c29925c9f74027c0217cc5ade71e94414e340d291dd591f874bc578fbea1e"
	I1119 02:54:18.174960 1625403 cri.go:89] found id: "e150eb077157007e590ae1733965580b7175324548f25a32203bab129b2bd815"
	I1119 02:54:18.174979 1625403 cri.go:89] found id: "92c3ad91064be1f0a314b7990a3febf510451be94e08960b34dcdff3fadc057b"
	I1119 02:54:18.174997 1625403 cri.go:89] found id: "eb8cf828ba50c64f1cda8c35f26f210691c1cb238dd26ab0d751895dff0facef"
	I1119 02:54:18.175015 1625403 cri.go:89] found id: "1ca5597ffbe5c95f3994042247107836a869c47234e06acdc0ead2bc3dded4ac"
	I1119 02:54:18.175041 1625403 cri.go:89] found id: "b89fe3c5e3979493011dde519b29f7ae915a6bd84a62073ff542628e53e0b863"
	I1119 02:54:18.175067 1625403 cri.go:89] found id: "1cb2a0f2c8744125697bf96d494f11f98ffa7f0812d3661e3ae50c530dcb2241"
	I1119 02:54:18.175086 1625403 cri.go:89] found id: "6e6bccbb7a956f12be30b89d73d28b8866fad0012f5638de488e292311f075e7"
	I1119 02:54:18.175103 1625403 cri.go:89] found id: "e3f1e86ddd1d329884483c2ad1df7a1973076d6ec408d93655d53c17a56315e3"
	I1119 02:54:18.175121 1625403 cri.go:89] found id: "da21118c4e7ffd2ca35cc7a4a6cbace43bc77174d343ccb4fbf9ea2f65d04d5e"
	I1119 02:54:18.175150 1625403 cri.go:89] found id: "99c5fdf54f0795b8af2a7e440cbeb21a2991c76dbb380f799c0f0a3f93211efa"
	I1119 02:54:18.175174 1625403 cri.go:89] found id: ""
	I1119 02:54:18.175258 1625403 ssh_runner.go:195] Run: sudo runc list -f json
	W1119 02:54:18.190999 1625403 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:54:18Z" level=error msg="open /run/runc: no such file or directory"
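	Note: "runc list" reads per-container state from its root directory, and the error above shows /run/runc missing on this node, so the paused-container check falls back to the config-file probe that follows; enumerating through the CRI, as done a few lines earlier, still works:
	    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system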
	I1119 02:54:18.191139 1625403 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 02:54:18.203524 1625403 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1119 02:54:18.203589 1625403 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1119 02:54:18.203676 1625403 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1119 02:54:18.215496 1625403 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1119 02:54:18.218254 1625403 kubeconfig.go:125] found "pause-210634" server: "https://192.168.85.2:8443"
	I1119 02:54:18.219248 1625403 kapi.go:59] client config for pause-210634: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/pause-210634/client.crt", KeyFile:"/home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/pause-210634/client.key", CAFile:"/home/jenkins/minikube-integration/21924-1463525/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:
[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2127810), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1119 02:54:18.219807 1625403 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1119 02:54:18.219845 1625403 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1119 02:54:18.219909 1625403 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1119 02:54:18.219934 1625403 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1119 02:54:18.219953 1625403 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1119 02:54:18.220264 1625403 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1119 02:54:18.235515 1625403 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1119 02:54:18.235587 1625403 kubeadm.go:602] duration metric: took 31.979222ms to restartPrimaryControlPlane
	I1119 02:54:18.235614 1625403 kubeadm.go:403] duration metric: took 149.847853ms to StartCluster
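	Note: the restart path treats an empty "diff -u kubeadm.yaml kubeadm.yaml.new" as "no reconfiguration needed" (the kubeadm.go:635 line above); the same check by hand:
	    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
	      && echo "configs identical - reuse the running control plane"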
	I1119 02:54:18.235655 1625403 settings.go:142] acquiring lock: {Name:mk60840cd190596747bdc8f4ec6ab9c30048bf11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:54:18.235734 1625403 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21924-1463525/kubeconfig
	I1119 02:54:18.236577 1625403 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/kubeconfig: {Name:mk569dbb5fa76b01404eb61ebb04e9884665ea78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:54:18.236849 1625403 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 02:54:18.237250 1625403 config.go:182] Loaded profile config "pause-210634": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:54:18.237223 1625403 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 02:54:18.242348 1625403 out.go:179] * Enabled addons: 
	I1119 02:54:18.242477 1625403 out.go:179] * Verifying Kubernetes components...
	I1119 02:54:13.842841 1608779 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 02:54:13.843296 1608779 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1119 02:54:13.843354 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 02:54:13.843461 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 02:54:13.868357 1608779 cri.go:89] found id: "b7cd8cb563c444fc3e8b2f5a65af9f99e0a16e61affc3a354a308582b4899e79"
	I1119 02:54:13.868379 1608779 cri.go:89] found id: ""
	I1119 02:54:13.868387 1608779 logs.go:282] 1 containers: [b7cd8cb563c444fc3e8b2f5a65af9f99e0a16e61affc3a354a308582b4899e79]
	I1119 02:54:13.868445 1608779 ssh_runner.go:195] Run: which crictl
	I1119 02:54:13.872299 1608779 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 02:54:13.872367 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 02:54:13.897565 1608779 cri.go:89] found id: ""
	I1119 02:54:13.897637 1608779 logs.go:282] 0 containers: []
	W1119 02:54:13.897672 1608779 logs.go:284] No container was found matching "etcd"
	I1119 02:54:13.897696 1608779 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 02:54:13.897785 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 02:54:13.922997 1608779 cri.go:89] found id: ""
	I1119 02:54:13.923020 1608779 logs.go:282] 0 containers: []
	W1119 02:54:13.923037 1608779 logs.go:284] No container was found matching "coredns"
	I1119 02:54:13.923044 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 02:54:13.923102 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 02:54:13.950542 1608779 cri.go:89] found id: "66d47217a8e90ebe2f0ae3a5a07bfee4ccb7f856fca793dec477ab23b15cc03d"
	I1119 02:54:13.950566 1608779 cri.go:89] found id: ""
	I1119 02:54:13.950574 1608779 logs.go:282] 1 containers: [66d47217a8e90ebe2f0ae3a5a07bfee4ccb7f856fca793dec477ab23b15cc03d]
	I1119 02:54:13.950660 1608779 ssh_runner.go:195] Run: which crictl
	I1119 02:54:13.954579 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 02:54:13.954680 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 02:54:13.981689 1608779 cri.go:89] found id: ""
	I1119 02:54:13.981729 1608779 logs.go:282] 0 containers: []
	W1119 02:54:13.981739 1608779 logs.go:284] No container was found matching "kube-proxy"
	I1119 02:54:13.981746 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 02:54:13.981814 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 02:54:14.011315 1608779 cri.go:89] found id: "b82017d2142c6347c55605889407441b8e982da03ec51f193858685925fb47c5"
	I1119 02:54:14.011340 1608779 cri.go:89] found id: "7d775c67113fa1266ea751d89fce6dee2939bcc84dea7253349321767730c733"
	I1119 02:54:14.011353 1608779 cri.go:89] found id: ""
	I1119 02:54:14.011360 1608779 logs.go:282] 2 containers: [b82017d2142c6347c55605889407441b8e982da03ec51f193858685925fb47c5 7d775c67113fa1266ea751d89fce6dee2939bcc84dea7253349321767730c733]
	I1119 02:54:14.011423 1608779 ssh_runner.go:195] Run: which crictl
	I1119 02:54:14.015611 1608779 ssh_runner.go:195] Run: which crictl
	I1119 02:54:14.019704 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 02:54:14.019783 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 02:54:14.050245 1608779 cri.go:89] found id: ""
	I1119 02:54:14.050272 1608779 logs.go:282] 0 containers: []
	W1119 02:54:14.050281 1608779 logs.go:284] No container was found matching "kindnet"
	I1119 02:54:14.050289 1608779 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 02:54:14.050381 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 02:54:14.078827 1608779 cri.go:89] found id: ""
	I1119 02:54:14.078856 1608779 logs.go:282] 0 containers: []
	W1119 02:54:14.078866 1608779 logs.go:284] No container was found matching "storage-provisioner"
	I1119 02:54:14.078879 1608779 logs.go:123] Gathering logs for kubelet ...
	I1119 02:54:14.078892 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 02:54:14.196212 1608779 logs.go:123] Gathering logs for kube-controller-manager [7d775c67113fa1266ea751d89fce6dee2939bcc84dea7253349321767730c733] ...
	I1119 02:54:14.196251 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7d775c67113fa1266ea751d89fce6dee2939bcc84dea7253349321767730c733"
	I1119 02:54:14.225192 1608779 logs.go:123] Gathering logs for CRI-O ...
	I1119 02:54:14.225226 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 02:54:14.284353 1608779 logs.go:123] Gathering logs for container status ...
	I1119 02:54:14.284389 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 02:54:14.328098 1608779 logs.go:123] Gathering logs for dmesg ...
	I1119 02:54:14.328126 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 02:54:14.348781 1608779 logs.go:123] Gathering logs for describe nodes ...
	I1119 02:54:14.348809 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 02:54:14.434833 1608779 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 02:54:14.434851 1608779 logs.go:123] Gathering logs for kube-apiserver [b7cd8cb563c444fc3e8b2f5a65af9f99e0a16e61affc3a354a308582b4899e79] ...
	I1119 02:54:14.434863 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b7cd8cb563c444fc3e8b2f5a65af9f99e0a16e61affc3a354a308582b4899e79"
	I1119 02:54:14.470708 1608779 logs.go:123] Gathering logs for kube-scheduler [66d47217a8e90ebe2f0ae3a5a07bfee4ccb7f856fca793dec477ab23b15cc03d] ...
	I1119 02:54:14.470742 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 66d47217a8e90ebe2f0ae3a5a07bfee4ccb7f856fca793dec477ab23b15cc03d"
	I1119 02:54:14.537594 1608779 logs.go:123] Gathering logs for kube-controller-manager [b82017d2142c6347c55605889407441b8e982da03ec51f193858685925fb47c5] ...
	I1119 02:54:14.537632 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b82017d2142c6347c55605889407441b8e982da03ec51f193858685925fb47c5"
	I1119 02:54:17.077624 1608779 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 02:54:17.077962 1608779 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1119 02:54:17.078013 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 02:54:17.078069 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 02:54:17.119575 1608779 cri.go:89] found id: "b7cd8cb563c444fc3e8b2f5a65af9f99e0a16e61affc3a354a308582b4899e79"
	I1119 02:54:17.119595 1608779 cri.go:89] found id: ""
	I1119 02:54:17.119603 1608779 logs.go:282] 1 containers: [b7cd8cb563c444fc3e8b2f5a65af9f99e0a16e61affc3a354a308582b4899e79]
	I1119 02:54:17.119656 1608779 ssh_runner.go:195] Run: which crictl
	I1119 02:54:17.123311 1608779 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 02:54:17.123389 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 02:54:17.180777 1608779 cri.go:89] found id: ""
	I1119 02:54:17.180805 1608779 logs.go:282] 0 containers: []
	W1119 02:54:17.180813 1608779 logs.go:284] No container was found matching "etcd"
	I1119 02:54:17.180820 1608779 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 02:54:17.180875 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 02:54:17.230373 1608779 cri.go:89] found id: ""
	I1119 02:54:17.230409 1608779 logs.go:282] 0 containers: []
	W1119 02:54:17.230421 1608779 logs.go:284] No container was found matching "coredns"
	I1119 02:54:17.230428 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 02:54:17.230486 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 02:54:17.298996 1608779 cri.go:89] found id: "66d47217a8e90ebe2f0ae3a5a07bfee4ccb7f856fca793dec477ab23b15cc03d"
	I1119 02:54:17.299021 1608779 cri.go:89] found id: ""
	I1119 02:54:17.299029 1608779 logs.go:282] 1 containers: [66d47217a8e90ebe2f0ae3a5a07bfee4ccb7f856fca793dec477ab23b15cc03d]
	I1119 02:54:17.299083 1608779 ssh_runner.go:195] Run: which crictl
	I1119 02:54:17.302576 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 02:54:17.302650 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 02:54:17.355347 1608779 cri.go:89] found id: ""
	I1119 02:54:17.355376 1608779 logs.go:282] 0 containers: []
	W1119 02:54:17.355385 1608779 logs.go:284] No container was found matching "kube-proxy"
	I1119 02:54:17.355391 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 02:54:17.355453 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 02:54:17.409389 1608779 cri.go:89] found id: "b82017d2142c6347c55605889407441b8e982da03ec51f193858685925fb47c5"
	I1119 02:54:17.409413 1608779 cri.go:89] found id: ""
	I1119 02:54:17.409421 1608779 logs.go:282] 1 containers: [b82017d2142c6347c55605889407441b8e982da03ec51f193858685925fb47c5]
	I1119 02:54:17.409477 1608779 ssh_runner.go:195] Run: which crictl
	I1119 02:54:17.413102 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 02:54:17.413194 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 02:54:17.458918 1608779 cri.go:89] found id: ""
	I1119 02:54:17.458945 1608779 logs.go:282] 0 containers: []
	W1119 02:54:17.458954 1608779 logs.go:284] No container was found matching "kindnet"
	I1119 02:54:17.458960 1608779 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 02:54:17.459020 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 02:54:17.503733 1608779 cri.go:89] found id: ""
	I1119 02:54:17.503760 1608779 logs.go:282] 0 containers: []
	W1119 02:54:17.503769 1608779 logs.go:284] No container was found matching "storage-provisioner"
	I1119 02:54:17.503778 1608779 logs.go:123] Gathering logs for dmesg ...
	I1119 02:54:17.503790 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 02:54:17.531637 1608779 logs.go:123] Gathering logs for describe nodes ...
	I1119 02:54:17.531671 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 02:54:17.656761 1608779 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 02:54:17.656787 1608779 logs.go:123] Gathering logs for kube-apiserver [b7cd8cb563c444fc3e8b2f5a65af9f99e0a16e61affc3a354a308582b4899e79] ...
	I1119 02:54:17.656800 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b7cd8cb563c444fc3e8b2f5a65af9f99e0a16e61affc3a354a308582b4899e79"
	I1119 02:54:17.714853 1608779 logs.go:123] Gathering logs for kube-scheduler [66d47217a8e90ebe2f0ae3a5a07bfee4ccb7f856fca793dec477ab23b15cc03d] ...
	I1119 02:54:17.714887 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 66d47217a8e90ebe2f0ae3a5a07bfee4ccb7f856fca793dec477ab23b15cc03d"
	I1119 02:54:17.813725 1608779 logs.go:123] Gathering logs for kube-controller-manager [b82017d2142c6347c55605889407441b8e982da03ec51f193858685925fb47c5] ...
	I1119 02:54:17.813761 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b82017d2142c6347c55605889407441b8e982da03ec51f193858685925fb47c5"
	I1119 02:54:17.851207 1608779 logs.go:123] Gathering logs for CRI-O ...
	I1119 02:54:17.851239 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 02:54:17.931528 1608779 logs.go:123] Gathering logs for container status ...
	I1119 02:54:17.931563 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 02:54:17.973107 1608779 logs.go:123] Gathering logs for kubelet ...
	I1119 02:54:17.973144 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 02:54:18.245174 1625403 addons.go:515] duration metric: took 7.940568ms for enable addons: enabled=[]
	I1119 02:54:18.245292 1625403 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:54:18.549656 1625403 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 02:54:18.564616 1625403 node_ready.go:35] waiting up to 6m0s for node "pause-210634" to be "Ready" ...
	I1119 02:54:20.623219 1608779 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 02:54:20.623611 1608779 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1119 02:54:20.623672 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 02:54:20.623731 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 02:54:20.668938 1608779 cri.go:89] found id: "b7cd8cb563c444fc3e8b2f5a65af9f99e0a16e61affc3a354a308582b4899e79"
	I1119 02:54:20.668963 1608779 cri.go:89] found id: ""
	I1119 02:54:20.668972 1608779 logs.go:282] 1 containers: [b7cd8cb563c444fc3e8b2f5a65af9f99e0a16e61affc3a354a308582b4899e79]
	I1119 02:54:20.669025 1608779 ssh_runner.go:195] Run: which crictl
	I1119 02:54:20.677924 1608779 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 02:54:20.678002 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 02:54:20.744920 1608779 cri.go:89] found id: ""
	I1119 02:54:20.744947 1608779 logs.go:282] 0 containers: []
	W1119 02:54:20.744956 1608779 logs.go:284] No container was found matching "etcd"
	I1119 02:54:20.744968 1608779 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 02:54:20.745026 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 02:54:20.791188 1608779 cri.go:89] found id: ""
	I1119 02:54:20.791216 1608779 logs.go:282] 0 containers: []
	W1119 02:54:20.791225 1608779 logs.go:284] No container was found matching "coredns"
	I1119 02:54:20.791232 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 02:54:20.791295 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 02:54:20.825816 1608779 cri.go:89] found id: "66d47217a8e90ebe2f0ae3a5a07bfee4ccb7f856fca793dec477ab23b15cc03d"
	I1119 02:54:20.825841 1608779 cri.go:89] found id: ""
	I1119 02:54:20.825850 1608779 logs.go:282] 1 containers: [66d47217a8e90ebe2f0ae3a5a07bfee4ccb7f856fca793dec477ab23b15cc03d]
	I1119 02:54:20.825904 1608779 ssh_runner.go:195] Run: which crictl
	I1119 02:54:20.829824 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 02:54:20.829901 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 02:54:20.859120 1608779 cri.go:89] found id: ""
	I1119 02:54:20.859149 1608779 logs.go:282] 0 containers: []
	W1119 02:54:20.859157 1608779 logs.go:284] No container was found matching "kube-proxy"
	I1119 02:54:20.859163 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 02:54:20.859225 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 02:54:20.895708 1608779 cri.go:89] found id: "b82017d2142c6347c55605889407441b8e982da03ec51f193858685925fb47c5"
	I1119 02:54:20.895734 1608779 cri.go:89] found id: ""
	I1119 02:54:20.895742 1608779 logs.go:282] 1 containers: [b82017d2142c6347c55605889407441b8e982da03ec51f193858685925fb47c5]
	I1119 02:54:20.895797 1608779 ssh_runner.go:195] Run: which crictl
	I1119 02:54:20.899460 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 02:54:20.899532 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 02:54:20.928813 1608779 cri.go:89] found id: ""
	I1119 02:54:20.928841 1608779 logs.go:282] 0 containers: []
	W1119 02:54:20.928850 1608779 logs.go:284] No container was found matching "kindnet"
	I1119 02:54:20.928856 1608779 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 02:54:20.928913 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 02:54:20.967889 1608779 cri.go:89] found id: ""
	I1119 02:54:20.967916 1608779 logs.go:282] 0 containers: []
	W1119 02:54:20.967925 1608779 logs.go:284] No container was found matching "storage-provisioner"
	I1119 02:54:20.967933 1608779 logs.go:123] Gathering logs for CRI-O ...
	I1119 02:54:20.967945 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 02:54:21.044181 1608779 logs.go:123] Gathering logs for container status ...
	I1119 02:54:21.044219 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 02:54:21.123818 1608779 logs.go:123] Gathering logs for kubelet ...
	I1119 02:54:21.123848 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 02:54:21.271050 1608779 logs.go:123] Gathering logs for dmesg ...
	I1119 02:54:21.271086 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 02:54:21.298616 1608779 logs.go:123] Gathering logs for describe nodes ...
	I1119 02:54:21.298646 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 02:54:21.424810 1608779 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 02:54:21.424833 1608779 logs.go:123] Gathering logs for kube-apiserver [b7cd8cb563c444fc3e8b2f5a65af9f99e0a16e61affc3a354a308582b4899e79] ...
	I1119 02:54:21.424853 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b7cd8cb563c444fc3e8b2f5a65af9f99e0a16e61affc3a354a308582b4899e79"
	I1119 02:54:21.476709 1608779 logs.go:123] Gathering logs for kube-scheduler [66d47217a8e90ebe2f0ae3a5a07bfee4ccb7f856fca793dec477ab23b15cc03d] ...
	I1119 02:54:21.476744 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 66d47217a8e90ebe2f0ae3a5a07bfee4ccb7f856fca793dec477ab23b15cc03d"
	I1119 02:54:21.564650 1608779 logs.go:123] Gathering logs for kube-controller-manager [b82017d2142c6347c55605889407441b8e982da03ec51f193858685925fb47c5] ...
	I1119 02:54:21.564691 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b82017d2142c6347c55605889407441b8e982da03ec51f193858685925fb47c5"
	I1119 02:54:22.641446 1625403 node_ready.go:49] node "pause-210634" is "Ready"
	I1119 02:54:22.641473 1625403 node_ready.go:38] duration metric: took 4.076786154s for node "pause-210634" to be "Ready" ...
	I1119 02:54:22.641485 1625403 api_server.go:52] waiting for apiserver process to appear ...
	I1119 02:54:22.641562 1625403 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 02:54:22.662936 1625403 api_server.go:72] duration metric: took 4.426030907s to wait for apiserver process to appear ...
	I1119 02:54:22.662956 1625403 api_server.go:88] waiting for apiserver healthz status ...
	I1119 02:54:22.662975 1625403 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1119 02:54:22.690703 1625403 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1119 02:54:22.690781 1625403 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1119 02:54:23.163964 1625403 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1119 02:54:23.172070 1625403 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1119 02:54:23.172100 1625403 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1119 02:54:23.663423 1625403 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1119 02:54:23.671790 1625403 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1119 02:54:23.672771 1625403 api_server.go:141] control plane version: v1.34.1
	I1119 02:54:23.672793 1625403 api_server.go:131] duration metric: took 1.009829153s to wait for apiserver health ...
	I1119 02:54:23.672803 1625403 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 02:54:23.676123 1625403 system_pods.go:59] 7 kube-system pods found
	I1119 02:54:23.676161 1625403 system_pods.go:61] "coredns-66bc5c9577-p4snv" [35d307ff-e63a-486d-9eb8-95e7cf67119f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:54:23.676170 1625403 system_pods.go:61] "etcd-pause-210634" [c521dfe2-7cf4-4b2a-9b3d-91446fe702cb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1119 02:54:23.676175 1625403 system_pods.go:61] "kindnet-w68ds" [6af1936b-8342-4b94-8c66-84cea32746ff] Running
	I1119 02:54:23.676183 1625403 system_pods.go:61] "kube-apiserver-pause-210634" [46803f27-17b1-4f8f-8e3c-4af2a69d6004] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1119 02:54:23.676192 1625403 system_pods.go:61] "kube-controller-manager-pause-210634" [d4c9f203-70b9-4b92-a92b-f36b52e83543] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1119 02:54:23.676197 1625403 system_pods.go:61] "kube-proxy-r7bhh" [d06bc070-5f4f-4e5d-9268-f0bbefdd7fdb] Running
	I1119 02:54:23.676207 1625403 system_pods.go:61] "kube-scheduler-pause-210634" [3cae9114-a64c-4a8d-98c1-3fe8dc773023] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1119 02:54:23.676213 1625403 system_pods.go:74] duration metric: took 3.403645ms to wait for pod list to return data ...
	I1119 02:54:23.676222 1625403 default_sa.go:34] waiting for default service account to be created ...
	I1119 02:54:23.678867 1625403 default_sa.go:45] found service account: "default"
	I1119 02:54:23.678892 1625403 default_sa.go:55] duration metric: took 2.659838ms for default service account to be created ...
	I1119 02:54:23.678902 1625403 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 02:54:23.681758 1625403 system_pods.go:86] 7 kube-system pods found
	I1119 02:54:23.681788 1625403 system_pods.go:89] "coredns-66bc5c9577-p4snv" [35d307ff-e63a-486d-9eb8-95e7cf67119f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:54:23.681798 1625403 system_pods.go:89] "etcd-pause-210634" [c521dfe2-7cf4-4b2a-9b3d-91446fe702cb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1119 02:54:23.681805 1625403 system_pods.go:89] "kindnet-w68ds" [6af1936b-8342-4b94-8c66-84cea32746ff] Running
	I1119 02:54:23.681811 1625403 system_pods.go:89] "kube-apiserver-pause-210634" [46803f27-17b1-4f8f-8e3c-4af2a69d6004] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1119 02:54:23.681818 1625403 system_pods.go:89] "kube-controller-manager-pause-210634" [d4c9f203-70b9-4b92-a92b-f36b52e83543] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1119 02:54:23.681843 1625403 system_pods.go:89] "kube-proxy-r7bhh" [d06bc070-5f4f-4e5d-9268-f0bbefdd7fdb] Running
	I1119 02:54:23.681852 1625403 system_pods.go:89] "kube-scheduler-pause-210634" [3cae9114-a64c-4a8d-98c1-3fe8dc773023] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1119 02:54:23.681861 1625403 system_pods.go:126] duration metric: took 2.951597ms to wait for k8s-apps to be running ...
	I1119 02:54:23.681869 1625403 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 02:54:23.681929 1625403 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 02:54:23.697344 1625403 system_svc.go:56] duration metric: took 15.463374ms WaitForService to wait for kubelet
	I1119 02:54:23.697421 1625403 kubeadm.go:587] duration metric: took 5.4605215s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 02:54:23.697454 1625403 node_conditions.go:102] verifying NodePressure condition ...
	I1119 02:54:23.699981 1625403 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1119 02:54:23.700015 1625403 node_conditions.go:123] node cpu capacity is 2
	I1119 02:54:23.700028 1625403 node_conditions.go:105] duration metric: took 2.553356ms to run NodePressure ...
	I1119 02:54:23.700040 1625403 start.go:242] waiting for startup goroutines ...
	I1119 02:54:23.700076 1625403 start.go:247] waiting for cluster config update ...
	I1119 02:54:23.700090 1625403 start.go:256] writing updated cluster config ...
	I1119 02:54:23.700403 1625403 ssh_runner.go:195] Run: rm -f paused
	I1119 02:54:23.703908 1625403 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 02:54:23.704565 1625403 kapi.go:59] client config for pause-210634: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/pause-210634/client.crt", KeyFile:"/home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/pause-210634/client.key", CAFile:"/home/jenkins/minikube-integration/21924-1463525/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:
[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2127810), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1119 02:54:23.707673 1625403 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-p4snv" in "kube-system" namespace to be "Ready" or be gone ...
	W1119 02:54:25.714007 1625403 pod_ready.go:104] pod "coredns-66bc5c9577-p4snv" is not "Ready", error: <nil>
	I1119 02:54:24.107928 1608779 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 02:54:24.108400 1608779 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1119 02:54:24.108466 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 02:54:24.108550 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 02:54:24.144082 1608779 cri.go:89] found id: "b7cd8cb563c444fc3e8b2f5a65af9f99e0a16e61affc3a354a308582b4899e79"
	I1119 02:54:24.144105 1608779 cri.go:89] found id: ""
	I1119 02:54:24.144114 1608779 logs.go:282] 1 containers: [b7cd8cb563c444fc3e8b2f5a65af9f99e0a16e61affc3a354a308582b4899e79]
	I1119 02:54:24.144167 1608779 ssh_runner.go:195] Run: which crictl
	I1119 02:54:24.148124 1608779 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 02:54:24.148192 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 02:54:24.180967 1608779 cri.go:89] found id: ""
	I1119 02:54:24.181039 1608779 logs.go:282] 0 containers: []
	W1119 02:54:24.181062 1608779 logs.go:284] No container was found matching "etcd"
	I1119 02:54:24.181085 1608779 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 02:54:24.181172 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 02:54:24.211093 1608779 cri.go:89] found id: ""
	I1119 02:54:24.211166 1608779 logs.go:282] 0 containers: []
	W1119 02:54:24.211189 1608779 logs.go:284] No container was found matching "coredns"
	I1119 02:54:24.211211 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 02:54:24.211297 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 02:54:24.243910 1608779 cri.go:89] found id: "66d47217a8e90ebe2f0ae3a5a07bfee4ccb7f856fca793dec477ab23b15cc03d"
	I1119 02:54:24.243982 1608779 cri.go:89] found id: ""
	I1119 02:54:24.244004 1608779 logs.go:282] 1 containers: [66d47217a8e90ebe2f0ae3a5a07bfee4ccb7f856fca793dec477ab23b15cc03d]
	I1119 02:54:24.244093 1608779 ssh_runner.go:195] Run: which crictl
	I1119 02:54:24.248368 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 02:54:24.248492 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 02:54:24.277165 1608779 cri.go:89] found id: ""
	I1119 02:54:24.277237 1608779 logs.go:282] 0 containers: []
	W1119 02:54:24.277259 1608779 logs.go:284] No container was found matching "kube-proxy"
	I1119 02:54:24.277282 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 02:54:24.277370 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 02:54:24.309612 1608779 cri.go:89] found id: "b82017d2142c6347c55605889407441b8e982da03ec51f193858685925fb47c5"
	I1119 02:54:24.309633 1608779 cri.go:89] found id: ""
	I1119 02:54:24.309642 1608779 logs.go:282] 1 containers: [b82017d2142c6347c55605889407441b8e982da03ec51f193858685925fb47c5]
	I1119 02:54:24.309699 1608779 ssh_runner.go:195] Run: which crictl
	I1119 02:54:24.313350 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 02:54:24.313449 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 02:54:24.340551 1608779 cri.go:89] found id: ""
	I1119 02:54:24.340575 1608779 logs.go:282] 0 containers: []
	W1119 02:54:24.340584 1608779 logs.go:284] No container was found matching "kindnet"
	I1119 02:54:24.340591 1608779 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 02:54:24.340651 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 02:54:24.368490 1608779 cri.go:89] found id: ""
	I1119 02:54:24.368516 1608779 logs.go:282] 0 containers: []
	W1119 02:54:24.368525 1608779 logs.go:284] No container was found matching "storage-provisioner"
	I1119 02:54:24.368533 1608779 logs.go:123] Gathering logs for kube-controller-manager [b82017d2142c6347c55605889407441b8e982da03ec51f193858685925fb47c5] ...
	I1119 02:54:24.368545 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b82017d2142c6347c55605889407441b8e982da03ec51f193858685925fb47c5"
	I1119 02:54:24.403191 1608779 logs.go:123] Gathering logs for CRI-O ...
	I1119 02:54:24.403218 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 02:54:24.467457 1608779 logs.go:123] Gathering logs for container status ...
	I1119 02:54:24.467493 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 02:54:24.501649 1608779 logs.go:123] Gathering logs for kubelet ...
	I1119 02:54:24.501727 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 02:54:24.624310 1608779 logs.go:123] Gathering logs for dmesg ...
	I1119 02:54:24.624409 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 02:54:24.654517 1608779 logs.go:123] Gathering logs for describe nodes ...
	I1119 02:54:24.654543 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 02:54:24.761016 1608779 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 02:54:24.761086 1608779 logs.go:123] Gathering logs for kube-apiserver [b7cd8cb563c444fc3e8b2f5a65af9f99e0a16e61affc3a354a308582b4899e79] ...
	I1119 02:54:24.761114 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b7cd8cb563c444fc3e8b2f5a65af9f99e0a16e61affc3a354a308582b4899e79"
	I1119 02:54:24.803855 1608779 logs.go:123] Gathering logs for kube-scheduler [66d47217a8e90ebe2f0ae3a5a07bfee4ccb7f856fca793dec477ab23b15cc03d] ...
	I1119 02:54:24.803886 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 66d47217a8e90ebe2f0ae3a5a07bfee4ccb7f856fca793dec477ab23b15cc03d"
	I1119 02:54:27.395020 1608779 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 02:54:27.395391 1608779 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1119 02:54:27.395430 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 02:54:27.395482 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 02:54:27.427782 1608779 cri.go:89] found id: "b7cd8cb563c444fc3e8b2f5a65af9f99e0a16e61affc3a354a308582b4899e79"
	I1119 02:54:27.427805 1608779 cri.go:89] found id: ""
	I1119 02:54:27.427814 1608779 logs.go:282] 1 containers: [b7cd8cb563c444fc3e8b2f5a65af9f99e0a16e61affc3a354a308582b4899e79]
	I1119 02:54:27.427872 1608779 ssh_runner.go:195] Run: which crictl
	I1119 02:54:27.431761 1608779 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 02:54:27.431831 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 02:54:27.461796 1608779 cri.go:89] found id: ""
	I1119 02:54:27.461819 1608779 logs.go:282] 0 containers: []
	W1119 02:54:27.461827 1608779 logs.go:284] No container was found matching "etcd"
	I1119 02:54:27.461834 1608779 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 02:54:27.461894 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 02:54:27.488706 1608779 cri.go:89] found id: ""
	I1119 02:54:27.488730 1608779 logs.go:282] 0 containers: []
	W1119 02:54:27.488739 1608779 logs.go:284] No container was found matching "coredns"
	I1119 02:54:27.488746 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 02:54:27.488809 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 02:54:27.519578 1608779 cri.go:89] found id: "66d47217a8e90ebe2f0ae3a5a07bfee4ccb7f856fca793dec477ab23b15cc03d"
	I1119 02:54:27.519599 1608779 cri.go:89] found id: ""
	I1119 02:54:27.519607 1608779 logs.go:282] 1 containers: [66d47217a8e90ebe2f0ae3a5a07bfee4ccb7f856fca793dec477ab23b15cc03d]
	I1119 02:54:27.519662 1608779 ssh_runner.go:195] Run: which crictl
	I1119 02:54:27.523500 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 02:54:27.523575 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 02:54:27.550324 1608779 cri.go:89] found id: ""
	I1119 02:54:27.550348 1608779 logs.go:282] 0 containers: []
	W1119 02:54:27.550357 1608779 logs.go:284] No container was found matching "kube-proxy"
	I1119 02:54:27.550363 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 02:54:27.550434 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 02:54:27.576740 1608779 cri.go:89] found id: "b82017d2142c6347c55605889407441b8e982da03ec51f193858685925fb47c5"
	I1119 02:54:27.576761 1608779 cri.go:89] found id: ""
	I1119 02:54:27.576769 1608779 logs.go:282] 1 containers: [b82017d2142c6347c55605889407441b8e982da03ec51f193858685925fb47c5]
	I1119 02:54:27.576825 1608779 ssh_runner.go:195] Run: which crictl
	I1119 02:54:27.580494 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 02:54:27.580568 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 02:54:27.607017 1608779 cri.go:89] found id: ""
	I1119 02:54:27.607046 1608779 logs.go:282] 0 containers: []
	W1119 02:54:27.607054 1608779 logs.go:284] No container was found matching "kindnet"
	I1119 02:54:27.607061 1608779 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 02:54:27.607119 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 02:54:27.633257 1608779 cri.go:89] found id: ""
	I1119 02:54:27.633279 1608779 logs.go:282] 0 containers: []
	W1119 02:54:27.633288 1608779 logs.go:284] No container was found matching "storage-provisioner"
	I1119 02:54:27.633297 1608779 logs.go:123] Gathering logs for kube-apiserver [b7cd8cb563c444fc3e8b2f5a65af9f99e0a16e61affc3a354a308582b4899e79] ...
	I1119 02:54:27.633309 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b7cd8cb563c444fc3e8b2f5a65af9f99e0a16e61affc3a354a308582b4899e79"
	I1119 02:54:27.665454 1608779 logs.go:123] Gathering logs for kube-scheduler [66d47217a8e90ebe2f0ae3a5a07bfee4ccb7f856fca793dec477ab23b15cc03d] ...
	I1119 02:54:27.665486 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 66d47217a8e90ebe2f0ae3a5a07bfee4ccb7f856fca793dec477ab23b15cc03d"
	I1119 02:54:27.727524 1608779 logs.go:123] Gathering logs for kube-controller-manager [b82017d2142c6347c55605889407441b8e982da03ec51f193858685925fb47c5] ...
	I1119 02:54:27.727562 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b82017d2142c6347c55605889407441b8e982da03ec51f193858685925fb47c5"
	I1119 02:54:27.755805 1608779 logs.go:123] Gathering logs for CRI-O ...
	I1119 02:54:27.755833 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 02:54:27.817749 1608779 logs.go:123] Gathering logs for container status ...
	I1119 02:54:27.817787 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 02:54:27.847474 1608779 logs.go:123] Gathering logs for kubelet ...
	I1119 02:54:27.847499 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 02:54:27.971954 1608779 logs.go:123] Gathering logs for dmesg ...
	I1119 02:54:27.971993 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 02:54:27.988612 1608779 logs.go:123] Gathering logs for describe nodes ...
	I1119 02:54:27.988641 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 02:54:28.067650 1608779 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1119 02:54:28.213471 1625403 pod_ready.go:104] pod "coredns-66bc5c9577-p4snv" is not "Ready", error: <nil>
	I1119 02:54:28.713244 1625403 pod_ready.go:94] pod "coredns-66bc5c9577-p4snv" is "Ready"
	I1119 02:54:28.713276 1625403 pod_ready.go:86] duration metric: took 5.005575219s for pod "coredns-66bc5c9577-p4snv" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:54:28.715594 1625403 pod_ready.go:83] waiting for pod "etcd-pause-210634" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:54:28.719625 1625403 pod_ready.go:94] pod "etcd-pause-210634" is "Ready"
	I1119 02:54:28.719651 1625403 pod_ready.go:86] duration metric: took 4.030244ms for pod "etcd-pause-210634" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:54:28.721840 1625403 pod_ready.go:83] waiting for pod "kube-apiserver-pause-210634" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:54:29.227893 1625403 pod_ready.go:94] pod "kube-apiserver-pause-210634" is "Ready"
	I1119 02:54:29.227922 1625403 pod_ready.go:86] duration metric: took 506.062135ms for pod "kube-apiserver-pause-210634" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:54:29.230199 1625403 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-210634" in "kube-system" namespace to be "Ready" or be gone ...
	W1119 02:54:31.236761 1625403 pod_ready.go:104] pod "kube-controller-manager-pause-210634" is not "Ready", error: <nil>
	I1119 02:54:30.567815 1608779 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 02:54:30.568261 1608779 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1119 02:54:30.568307 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 02:54:30.568361 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 02:54:30.597872 1608779 cri.go:89] found id: "b7cd8cb563c444fc3e8b2f5a65af9f99e0a16e61affc3a354a308582b4899e79"
	I1119 02:54:30.597894 1608779 cri.go:89] found id: ""
	I1119 02:54:30.597902 1608779 logs.go:282] 1 containers: [b7cd8cb563c444fc3e8b2f5a65af9f99e0a16e61affc3a354a308582b4899e79]
	I1119 02:54:30.597961 1608779 ssh_runner.go:195] Run: which crictl
	I1119 02:54:30.601540 1608779 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 02:54:30.601613 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 02:54:30.628537 1608779 cri.go:89] found id: ""
	I1119 02:54:30.628560 1608779 logs.go:282] 0 containers: []
	W1119 02:54:30.628569 1608779 logs.go:284] No container was found matching "etcd"
	I1119 02:54:30.628575 1608779 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 02:54:30.628682 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 02:54:30.654029 1608779 cri.go:89] found id: ""
	I1119 02:54:30.654058 1608779 logs.go:282] 0 containers: []
	W1119 02:54:30.654068 1608779 logs.go:284] No container was found matching "coredns"
	I1119 02:54:30.654074 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 02:54:30.654153 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 02:54:30.686790 1608779 cri.go:89] found id: "66d47217a8e90ebe2f0ae3a5a07bfee4ccb7f856fca793dec477ab23b15cc03d"
	I1119 02:54:30.686815 1608779 cri.go:89] found id: ""
	I1119 02:54:30.686823 1608779 logs.go:282] 1 containers: [66d47217a8e90ebe2f0ae3a5a07bfee4ccb7f856fca793dec477ab23b15cc03d]
	I1119 02:54:30.686879 1608779 ssh_runner.go:195] Run: which crictl
	I1119 02:54:30.690711 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 02:54:30.690789 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 02:54:30.718829 1608779 cri.go:89] found id: ""
	I1119 02:54:30.718858 1608779 logs.go:282] 0 containers: []
	W1119 02:54:30.718866 1608779 logs.go:284] No container was found matching "kube-proxy"
	I1119 02:54:30.718872 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 02:54:30.718947 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 02:54:30.753424 1608779 cri.go:89] found id: "b82017d2142c6347c55605889407441b8e982da03ec51f193858685925fb47c5"
	I1119 02:54:30.753447 1608779 cri.go:89] found id: ""
	I1119 02:54:30.753456 1608779 logs.go:282] 1 containers: [b82017d2142c6347c55605889407441b8e982da03ec51f193858685925fb47c5]
	I1119 02:54:30.753533 1608779 ssh_runner.go:195] Run: which crictl
	I1119 02:54:30.757101 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 02:54:30.757169 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 02:54:30.787760 1608779 cri.go:89] found id: ""
	I1119 02:54:30.787786 1608779 logs.go:282] 0 containers: []
	W1119 02:54:30.787795 1608779 logs.go:284] No container was found matching "kindnet"
	I1119 02:54:30.787802 1608779 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 02:54:30.787862 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 02:54:30.815283 1608779 cri.go:89] found id: ""
	I1119 02:54:30.815305 1608779 logs.go:282] 0 containers: []
	W1119 02:54:30.815314 1608779 logs.go:284] No container was found matching "storage-provisioner"
	I1119 02:54:30.815323 1608779 logs.go:123] Gathering logs for kube-scheduler [66d47217a8e90ebe2f0ae3a5a07bfee4ccb7f856fca793dec477ab23b15cc03d] ...
	I1119 02:54:30.815335 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 66d47217a8e90ebe2f0ae3a5a07bfee4ccb7f856fca793dec477ab23b15cc03d"
	I1119 02:54:30.881640 1608779 logs.go:123] Gathering logs for kube-controller-manager [b82017d2142c6347c55605889407441b8e982da03ec51f193858685925fb47c5] ...
	I1119 02:54:30.881675 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b82017d2142c6347c55605889407441b8e982da03ec51f193858685925fb47c5"
	I1119 02:54:30.912471 1608779 logs.go:123] Gathering logs for CRI-O ...
	I1119 02:54:30.912496 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 02:54:30.974504 1608779 logs.go:123] Gathering logs for container status ...
	I1119 02:54:30.974548 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 02:54:31.014243 1608779 logs.go:123] Gathering logs for kubelet ...
	I1119 02:54:31.014272 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 02:54:31.131704 1608779 logs.go:123] Gathering logs for dmesg ...
	I1119 02:54:31.131740 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 02:54:31.150817 1608779 logs.go:123] Gathering logs for describe nodes ...
	I1119 02:54:31.150846 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 02:54:31.215502 1608779 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 02:54:31.215520 1608779 logs.go:123] Gathering logs for kube-apiserver [b7cd8cb563c444fc3e8b2f5a65af9f99e0a16e61affc3a354a308582b4899e79] ...
	I1119 02:54:31.215533 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b7cd8cb563c444fc3e8b2f5a65af9f99e0a16e61affc3a354a308582b4899e79"
	I1119 02:54:32.735553 1625403 pod_ready.go:94] pod "kube-controller-manager-pause-210634" is "Ready"
	I1119 02:54:32.735581 1625403 pod_ready.go:86] duration metric: took 3.505358631s for pod "kube-controller-manager-pause-210634" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:54:32.737644 1625403 pod_ready.go:83] waiting for pod "kube-proxy-r7bhh" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:54:32.741444 1625403 pod_ready.go:94] pod "kube-proxy-r7bhh" is "Ready"
	I1119 02:54:32.741469 1625403 pod_ready.go:86] duration metric: took 3.804101ms for pod "kube-proxy-r7bhh" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:54:32.911515 1625403 pod_ready.go:83] waiting for pod "kube-scheduler-pause-210634" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:54:33.715489 1625403 pod_ready.go:94] pod "kube-scheduler-pause-210634" is "Ready"
	I1119 02:54:33.715515 1625403 pod_ready.go:86] duration metric: took 803.971383ms for pod "kube-scheduler-pause-210634" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:54:33.715528 1625403 pod_ready.go:40] duration metric: took 10.011589172s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 02:54:33.771353 1625403 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1119 02:54:33.774597 1625403 out.go:179] * Done! kubectl is now configured to use "pause-210634" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 19 02:54:16 pause-210634 crio[2075]: time="2025-11-19T02:54:16.766665723Z" level=info msg="Started container" PID=2212 containerID=92c3ad91064be1f0a314b7990a3febf510451be94e08960b34dcdff3fadc057b description=kube-system/kindnet-w68ds/kindnet-cni id=6387e877-3242-4635-aef7-6620a1c8c3ee name=/runtime.v1.RuntimeService/StartContainer sandboxID=16a6f497be353e42e0a9e3822edab92b277cae5ee3de753f7104bbc9b5d04a25
	Nov 19 02:54:16 pause-210634 crio[2075]: time="2025-11-19T02:54:16.772034646Z" level=info msg="Started container" PID=2213 containerID=e150eb077157007e590ae1733965580b7175324548f25a32203bab129b2bd815 description=kube-system/kube-proxy-r7bhh/kube-proxy id=93aae6c5-8138-4b82-b190-76fdc78b0d42 name=/runtime.v1.RuntimeService/StartContainer sandboxID=77e448635c6a0676d11bffb2e4595fcbf2668d1f5d96636f3e311e4aea44929e
	Nov 19 02:54:16 pause-210634 crio[2075]: time="2025-11-19T02:54:16.778917791Z" level=info msg="Started container" PID=2232 containerID=5b278aa8fa67a48484345b54805bb5c29e9162a5f54af5ee074fef8c5766b072 description=kube-system/kube-scheduler-pause-210634/kube-scheduler id=611ae272-8797-4249-890e-afa9735011e8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=783c70939f2a35787b02debaceb8d0f23df00b1f31624f27110ee5917dd6dc85
	Nov 19 02:54:16 pause-210634 crio[2075]: time="2025-11-19T02:54:16.812958058Z" level=info msg="Created container b48490446c0478d4b524dee6f413a7df871f5a694e492a481e6b826633cf96b5: kube-system/coredns-66bc5c9577-p4snv/coredns" id=2044a8ff-98b3-4977-877d-e9a568fb5333 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 02:54:16 pause-210634 crio[2075]: time="2025-11-19T02:54:16.818696456Z" level=info msg="Started container" PID=2225 containerID=003c29925c9f74027c0217cc5ade71e94414e340d291dd591f874bc578fbea1e description=kube-system/kube-controller-manager-pause-210634/kube-controller-manager id=88c0b744-4886-4d8f-90c0-0fb31d7b16bc name=/runtime.v1.RuntimeService/StartContainer sandboxID=1b99f0db290c54c8134fe204bc86fa20ce1f5440f1d8d92051b9aa036d497ff3
	Nov 19 02:54:16 pause-210634 crio[2075]: time="2025-11-19T02:54:16.839247029Z" level=info msg="Starting container: b48490446c0478d4b524dee6f413a7df871f5a694e492a481e6b826633cf96b5" id=9789803c-2f2b-4cee-8e30-1dbde1ce7ca9 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 02:54:16 pause-210634 crio[2075]: time="2025-11-19T02:54:16.850113003Z" level=info msg="Started container" PID=2248 containerID=b48490446c0478d4b524dee6f413a7df871f5a694e492a481e6b826633cf96b5 description=kube-system/coredns-66bc5c9577-p4snv/coredns id=9789803c-2f2b-4cee-8e30-1dbde1ce7ca9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=708923b24f536ea58db271fa3cbd6db0c549da26783bb4063eae647c13d139d3
	Nov 19 02:54:16 pause-210634 crio[2075]: time="2025-11-19T02:54:16.85676486Z" level=info msg="Created container eae9c2747a6c090141bed859a946caeda9db2b858e668856f491efbcc50cc1f0: kube-system/kube-apiserver-pause-210634/kube-apiserver" id=76dc1f4d-0d24-4884-8d2f-651382d51a88 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 02:54:16 pause-210634 crio[2075]: time="2025-11-19T02:54:16.860434803Z" level=info msg="Starting container: eae9c2747a6c090141bed859a946caeda9db2b858e668856f491efbcc50cc1f0" id=6777420d-7a11-42b7-908f-b7dfe9a2d366 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 02:54:16 pause-210634 crio[2075]: time="2025-11-19T02:54:16.862351754Z" level=info msg="Started container" PID=2249 containerID=eae9c2747a6c090141bed859a946caeda9db2b858e668856f491efbcc50cc1f0 description=kube-system/kube-apiserver-pause-210634/kube-apiserver id=6777420d-7a11-42b7-908f-b7dfe9a2d366 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e654c8c2052deacd645dbf93335aa0a3412e8cfc857e0999d788742442c77f42
	Nov 19 02:54:27 pause-210634 crio[2075]: time="2025-11-19T02:54:27.096065111Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 02:54:27 pause-210634 crio[2075]: time="2025-11-19T02:54:27.099402042Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 02:54:27 pause-210634 crio[2075]: time="2025-11-19T02:54:27.099437225Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 02:54:27 pause-210634 crio[2075]: time="2025-11-19T02:54:27.099459378Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 02:54:27 pause-210634 crio[2075]: time="2025-11-19T02:54:27.102365273Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 02:54:27 pause-210634 crio[2075]: time="2025-11-19T02:54:27.102403221Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 02:54:27 pause-210634 crio[2075]: time="2025-11-19T02:54:27.102426252Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 02:54:27 pause-210634 crio[2075]: time="2025-11-19T02:54:27.105487367Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 02:54:27 pause-210634 crio[2075]: time="2025-11-19T02:54:27.105654263Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 02:54:27 pause-210634 crio[2075]: time="2025-11-19T02:54:27.105689421Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 02:54:27 pause-210634 crio[2075]: time="2025-11-19T02:54:27.108418419Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 02:54:27 pause-210634 crio[2075]: time="2025-11-19T02:54:27.108446643Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 02:54:27 pause-210634 crio[2075]: time="2025-11-19T02:54:27.108469445Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 02:54:27 pause-210634 crio[2075]: time="2025-11-19T02:54:27.111332363Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 02:54:27 pause-210634 crio[2075]: time="2025-11-19T02:54:27.111393186Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	eae9c2747a6c0       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   20 seconds ago       Running             kube-apiserver            1                   e654c8c2052de       kube-apiserver-pause-210634            kube-system
	5b278aa8fa67a       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   20 seconds ago       Running             kube-scheduler            1                   783c70939f2a3       kube-scheduler-pause-210634            kube-system
	b48490446c047       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   20 seconds ago       Running             coredns                   1                   708923b24f536       coredns-66bc5c9577-p4snv               kube-system
	003c29925c9f7       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   20 seconds ago       Running             kube-controller-manager   1                   1b99f0db290c5       kube-controller-manager-pause-210634   kube-system
	e150eb0771570       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   20 seconds ago       Running             kube-proxy                1                   77e448635c6a0       kube-proxy-r7bhh                       kube-system
	92c3ad91064be       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   20 seconds ago       Running             kindnet-cni               1                   16a6f497be353       kindnet-w68ds                          kube-system
	eb8cf828ba50c       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   20 seconds ago       Running             etcd                      1                   228c0822f24bb       etcd-pause-210634                      kube-system
	1ca5597ffbe5c       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   32 seconds ago       Exited              coredns                   0                   708923b24f536       coredns-66bc5c9577-p4snv               kube-system
	b89fe3c5e3979       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Exited              kube-proxy                0                   77e448635c6a0       kube-proxy-r7bhh                       kube-system
	1cb2a0f2c8744       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   16a6f497be353       kindnet-w68ds                          kube-system
	6e6bccbb7a956       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   0                   1b99f0db290c5       kube-controller-manager-pause-210634   kube-system
	e3f1e86ddd1d3       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Exited              kube-scheduler            0                   783c70939f2a3       kube-scheduler-pause-210634            kube-system
	da21118c4e7ff       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Exited              etcd                      0                   228c0822f24bb       etcd-pause-210634                      kube-system
	99c5fdf54f079       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            0                   e654c8c2052de       kube-apiserver-pause-210634            kube-system
	
	
	==> coredns [1ca5597ffbe5c95f3994042247107836a869c47234e06acdc0ead2bc3dded4ac] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38751 - 18190 "HINFO IN 2228662234522525700.1617383602449744944. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.003946604s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [b48490446c0478d4b524dee6f413a7df871f5a694e492a481e6b826633cf96b5] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40405 - 35084 "HINFO IN 2104345973068509918.4265233509804500088. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.003799409s
	
	
	==> describe nodes <==
	Name:               pause-210634
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-210634
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ebd49efe8b7d44f89fc96e21beffcd64dfee8277
	                    minikube.k8s.io/name=pause-210634
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T02_53_19_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 02:53:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-210634
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 02:54:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 02:54:20 +0000   Wed, 19 Nov 2025 02:53:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 02:54:20 +0000   Wed, 19 Nov 2025 02:53:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 02:54:20 +0000   Wed, 19 Nov 2025 02:53:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 02:54:20 +0000   Wed, 19 Nov 2025 02:54:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-210634
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                282827d6-0439-4be2-8cf8-d4d9944eb954
	  Boot ID:                    b92b1939-fcd0-45dc-ac89-2d161566a71c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-p4snv                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     74s
	  kube-system                 etcd-pause-210634                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         79s
	  kube-system                 kindnet-w68ds                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      74s
	  kube-system                 kube-apiserver-pause-210634             250m (12%)    0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-controller-manager-pause-210634    200m (10%)    0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-proxy-r7bhh                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 kube-scheduler-pause-210634             100m (5%)     0 (0%)      0 (0%)           0 (0%)         81s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 71s                kube-proxy       
	  Normal   Starting                 14s                kube-proxy       
	  Normal   NodeHasSufficientPID     87s (x8 over 87s)  kubelet          Node pause-210634 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 87s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  87s (x8 over 87s)  kubelet          Node pause-210634 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    87s (x8 over 87s)  kubelet          Node pause-210634 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 87s                kubelet          Starting kubelet.
	  Normal   Starting                 79s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 79s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  79s                kubelet          Node pause-210634 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    79s                kubelet          Node pause-210634 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     79s                kubelet          Node pause-210634 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           75s                node-controller  Node pause-210634 event: Registered Node pause-210634 in Controller
	  Normal   NodeReady                33s                kubelet          Node pause-210634 status is now: NodeReady
	  Normal   RegisteredNode           13s                node-controller  Node pause-210634 event: Registered Node pause-210634 in Controller
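The node snapshot above appears to be standard kubectl describe output for the single control-plane node; a minimal sketch of reproducing it against the same profile, assuming the cluster is still running (illustrative, not part of the captured report):

	# Illustrative only; uses the context named after the minikube profile.
	kubectl --context pause-210634 describe node pause-210634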
	
	
	==> dmesg <==
	[Nov19 02:25] overlayfs: idmapped layers are currently not supported
	[ +42.421073] overlayfs: idmapped layers are currently not supported
	[Nov19 02:27] overlayfs: idmapped layers are currently not supported
	[  +3.136079] overlayfs: idmapped layers are currently not supported
	[ +45.971049] overlayfs: idmapped layers are currently not supported
	[Nov19 02:28] overlayfs: idmapped layers are currently not supported
	[Nov19 02:30] overlayfs: idmapped layers are currently not supported
	[Nov19 02:35] overlayfs: idmapped layers are currently not supported
	[ +37.747558] overlayfs: idmapped layers are currently not supported
	[Nov19 02:37] overlayfs: idmapped layers are currently not supported
	[Nov19 02:38] overlayfs: idmapped layers are currently not supported
	[Nov19 02:39] overlayfs: idmapped layers are currently not supported
	[Nov19 02:41] overlayfs: idmapped layers are currently not supported
	[ +25.528121] overlayfs: idmapped layers are currently not supported
	[ +11.329962] overlayfs: idmapped layers are currently not supported
	[Nov19 02:42] overlayfs: idmapped layers are currently not supported
	[ +16.386117] overlayfs: idmapped layers are currently not supported
	[Nov19 02:43] overlayfs: idmapped layers are currently not supported
	[ +23.762081] overlayfs: idmapped layers are currently not supported
	[Nov19 02:45] overlayfs: idmapped layers are currently not supported
	[Nov19 02:46] overlayfs: idmapped layers are currently not supported
	[Nov19 02:48] overlayfs: idmapped layers are currently not supported
	[Nov19 02:50] overlayfs: idmapped layers are currently not supported
	[ +30.622614] overlayfs: idmapped layers are currently not supported
	[Nov19 02:53] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [da21118c4e7ffd2ca35cc7a4a6cbace43bc77174d343ccb4fbf9ea2f65d04d5e] <==
	{"level":"warn","ts":"2025-11-19T02:53:14.461037Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:53:14.476699Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:53:14.490701Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:53:14.517224Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:53:14.534171Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:53:14.546929Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:53:14.611057Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41792","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-19T02:54:09.301606Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-19T02:54:09.301685Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-210634","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-11-19T02:54:09.301824Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-19T02:54:09.580335Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-19T02:54:09.580408Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-19T02:54:09.580430Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"info","ts":"2025-11-19T02:54:09.580484Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-11-19T02:54:09.580546Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-19T02:54:09.580640Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-19T02:54:09.580683Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-19T02:54:09.580642Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-19T02:54:09.580707Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-19T02:54:09.580716Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-19T02:54:09.580590Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-11-19T02:54:09.583937Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-11-19T02:54:09.584019Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-19T02:54:09.584052Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-19T02:54:09.584061Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-210634","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> etcd [eb8cf828ba50c64f1cda8c35f26f210691c1cb238dd26ab0d751895dff0facef] <==
	{"level":"warn","ts":"2025-11-19T02:54:20.788336Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:54:20.873709Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:54:20.926607Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:54:20.966380Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:54:21.027213Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:54:21.101290Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:54:21.150436Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:54:21.184847Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:54:21.199365Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:54:21.220493Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:54:21.267073Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:54:21.309906Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:54:21.422379Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:54:21.440464Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:54:21.458828Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:54:21.488246Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:54:21.519781Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:54:21.552841Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:54:21.584395Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:54:21.611606Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:54:21.628411Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:54:21.668849Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:54:21.682219Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:54:21.698072Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:54:21.781181Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33634","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 02:54:37 up 10:36,  0 user,  load average: 2.70, 2.60, 2.14
	Linux pause-210634 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1cb2a0f2c8744125697bf96d494f11f98ffa7f0812d3661e3ae50c530dcb2241] <==
	I1119 02:53:23.784691       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 02:53:23.784964       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1119 02:53:23.785100       1 main.go:148] setting mtu 1500 for CNI 
	I1119 02:53:23.785112       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 02:53:23.785126       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T02:53:23Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 02:53:23.944696       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 02:53:23.944777       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 02:53:23.944827       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 02:53:23.945305       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1119 02:53:53.945671       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1119 02:53:53.945719       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1119 02:53:53.950183       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1119 02:53:53.950331       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1119 02:53:55.545463       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 02:53:55.545495       1 metrics.go:72] Registering metrics
	I1119 02:53:55.545589       1 controller.go:711] "Syncing nftables rules"
	I1119 02:54:03.949593       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1119 02:54:03.949642       1 main.go:301] handling current node
	
	
	==> kindnet [92c3ad91064be1f0a314b7990a3febf510451be94e08960b34dcdff3fadc057b] <==
	I1119 02:54:16.832094       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 02:54:16.833008       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1119 02:54:16.833250       1 main.go:148] setting mtu 1500 for CNI 
	I1119 02:54:16.833320       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 02:54:16.833359       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T02:54:17Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 02:54:17.095225       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 02:54:17.095245       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 02:54:17.095268       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	E1119 02:54:17.111707       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1119 02:54:17.111801       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1119 02:54:17.111864       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1119 02:54:17.111925       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1119 02:54:17.112021       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1119 02:54:22.695753       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 02:54:22.695790       1 metrics.go:72] Registering metrics
	I1119 02:54:22.695851       1 controller.go:711] "Syncing nftables rules"
	I1119 02:54:27.095715       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1119 02:54:27.095771       1 main.go:301] handling current node
	I1119 02:54:37.097587       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1119 02:54:37.097724       1 main.go:301] handling current node
	
	
	==> kube-apiserver [99c5fdf54f0795b8af2a7e440cbeb21a2991c76dbb380f799c0f0a3f93211efa] <==
	W1119 02:54:09.314761       1 logging.go:55] [core] [Channel #163 SubChannel #165]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 02:54:09.314814       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 02:54:09.314861       1 logging.go:55] [core] [Channel #187 SubChannel #189]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 02:54:09.314912       1 logging.go:55] [core] [Channel #13 SubChannel #15]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 02:54:09.314976       1 logging.go:55] [core] [Channel #179 SubChannel #181]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 02:54:09.315028       1 logging.go:55] [core] [Channel #47 SubChannel #49]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 02:54:09.315083       1 logging.go:55] [core] [Channel #135 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 02:54:09.316306       1 logging.go:55] [core] [Channel #95 SubChannel #97]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 02:54:09.316359       1 logging.go:55] [core] [Channel #247 SubChannel #249]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 02:54:09.316395       1 logging.go:55] [core] [Channel #147 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 02:54:09.316432       1 logging.go:55] [core] [Channel #99 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 02:54:09.319052       1 logging.go:55] [core] [Channel #183 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 02:54:09.319121       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 02:54:09.319171       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 02:54:09.319327       1 logging.go:55] [core] [Channel #75 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 02:54:09.319376       1 logging.go:55] [core] [Channel #91 SubChannel #93]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 02:54:09.319443       1 logging.go:55] [core] [Channel #131 SubChannel #133]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 02:54:09.319490       1 logging.go:55] [core] [Channel #123 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 02:54:09.319879       1 logging.go:55] [core] [Channel #223 SubChannel #225]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 02:54:09.319926       1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 02:54:09.319966       1 logging.go:55] [core] [Channel #175 SubChannel #177]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 02:54:09.320222       1 logging.go:55] [core] [Channel #139 SubChannel #141]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 02:54:09.320268       1 logging.go:55] [core] [Channel #203 SubChannel #205]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 02:54:09.320451       1 logging.go:55] [core] [Channel #239 SubChannel #241]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [eae9c2747a6c090141bed859a946caeda9db2b858e668856f491efbcc50cc1f0] <==
	I1119 02:54:22.648325       1 autoregister_controller.go:144] Starting autoregister controller
	I1119 02:54:22.648333       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1119 02:54:22.653705       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1119 02:54:22.666122       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1119 02:54:22.666233       1 policy_source.go:240] refreshing policies
	I1119 02:54:22.715474       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 02:54:22.720841       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1119 02:54:22.720938       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1119 02:54:22.721601       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1119 02:54:22.727278       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1119 02:54:22.727707       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1119 02:54:22.727956       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1119 02:54:22.727994       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1119 02:54:22.728318       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1119 02:54:22.730116       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1119 02:54:22.730241       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1119 02:54:22.732917       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 02:54:22.752983       1 cache.go:39] Caches are synced for autoregister controller
	E1119 02:54:22.756044       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1119 02:54:23.326598       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 02:54:23.586766       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1119 02:54:25.004404       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1119 02:54:25.053139       1 controller.go:667] quota admission added evaluator for: endpoints
	I1119 02:54:25.203543       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1119 02:54:25.353750       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [003c29925c9f74027c0217cc5ade71e94414e340d291dd591f874bc578fbea1e] <==
	I1119 02:54:24.986766       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1119 02:54:24.994881       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1119 02:54:24.995090       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1119 02:54:24.996153       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1119 02:54:24.996207       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1119 02:54:24.996251       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1119 02:54:24.996277       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1119 02:54:24.996304       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1119 02:54:24.996336       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1119 02:54:24.996398       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1119 02:54:24.996469       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-210634"
	I1119 02:54:24.996505       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1119 02:54:24.996544       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1119 02:54:24.996708       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1119 02:54:24.997332       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1119 02:54:24.999059       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1119 02:54:24.999184       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 02:54:25.004236       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1119 02:54:25.004614       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1119 02:54:25.004690       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1119 02:54:25.004722       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1119 02:54:25.004754       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1119 02:54:25.023164       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1119 02:54:25.030561       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1119 02:54:25.033944       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-controller-manager [6e6bccbb7a956f12be30b89d73d28b8866fad0012f5638de488e292311f075e7] <==
	I1119 02:53:22.291501       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1119 02:53:22.293775       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1119 02:53:22.293828       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1119 02:53:22.293855       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1119 02:53:22.293869       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1119 02:53:22.293875       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1119 02:53:22.301965       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1119 02:53:22.302420       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-210634" podCIDRs=["10.244.0.0/24"]
	I1119 02:53:22.303673       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1119 02:53:22.312273       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1119 02:53:22.313568       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1119 02:53:22.322911       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 02:53:22.322961       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1119 02:53:22.331430       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1119 02:53:22.331560       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1119 02:53:22.333231       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 02:53:22.333259       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1119 02:53:22.333284       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1119 02:53:22.333334       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1119 02:53:22.333359       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1119 02:53:22.333406       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1119 02:53:22.333573       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1119 02:53:22.334381       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1119 02:53:22.336147       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1119 02:54:07.242415       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [b89fe3c5e3979493011dde519b29f7ae915a6bd84a62073ff542628e53e0b863] <==
	I1119 02:53:25.102212       1 server_linux.go:53] "Using iptables proxy"
	I1119 02:53:25.195016       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 02:53:25.295119       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 02:53:25.295155       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1119 02:53:25.295241       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 02:53:25.312879       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 02:53:25.312927       1 server_linux.go:132] "Using iptables Proxier"
	I1119 02:53:25.316437       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 02:53:25.316773       1 server.go:527] "Version info" version="v1.34.1"
	I1119 02:53:25.316848       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 02:53:25.320042       1 config.go:106] "Starting endpoint slice config controller"
	I1119 02:53:25.320124       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 02:53:25.320449       1 config.go:200] "Starting service config controller"
	I1119 02:53:25.320492       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 02:53:25.320788       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 02:53:25.325719       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 02:53:25.326301       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1119 02:53:25.321086       1 config.go:309] "Starting node config controller"
	I1119 02:53:25.326323       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 02:53:25.326328       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 02:53:25.421189       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1119 02:53:25.421280       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-proxy [e150eb077157007e590ae1733965580b7175324548f25a32203bab129b2bd815] <==
	I1119 02:54:18.216586       1 server_linux.go:53] "Using iptables proxy"
	I1119 02:54:19.908039       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 02:54:22.763805       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 02:54:22.763912       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1119 02:54:22.764037       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 02:54:22.793239       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 02:54:22.793304       1 server_linux.go:132] "Using iptables Proxier"
	I1119 02:54:22.811993       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 02:54:22.812372       1 server.go:527] "Version info" version="v1.34.1"
	I1119 02:54:22.812603       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 02:54:22.814022       1 config.go:200] "Starting service config controller"
	I1119 02:54:22.814089       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 02:54:22.814108       1 config.go:106] "Starting endpoint slice config controller"
	I1119 02:54:22.814114       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 02:54:22.814140       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 02:54:22.814156       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 02:54:22.817796       1 config.go:309] "Starting node config controller"
	I1119 02:54:22.817867       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 02:54:22.817898       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 02:54:22.915070       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1119 02:54:22.915110       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1119 02:54:22.915078       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [5b278aa8fa67a48484345b54805bb5c29e9162a5f54af5ee074fef8c5766b072] <==
	I1119 02:54:19.921478       1 serving.go:386] Generated self-signed cert in-memory
	W1119 02:54:22.605585       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1119 02:54:22.605681       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1119 02:54:22.605716       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1119 02:54:22.605744       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1119 02:54:22.692638       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1119 02:54:22.692755       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 02:54:22.694844       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 02:54:22.694941       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 02:54:22.702773       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1119 02:54:22.702924       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1119 02:54:22.798150       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [e3f1e86ddd1d329884483c2ad1df7a1973076d6ec408d93655d53c17a56315e3] <==
	E1119 02:53:15.427446       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1119 02:53:15.427513       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1119 02:53:15.427628       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1119 02:53:15.427704       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1119 02:53:15.428357       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1119 02:53:16.265643       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1119 02:53:16.347973       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1119 02:53:16.350436       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1119 02:53:16.380950       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1119 02:53:16.386436       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1119 02:53:16.417121       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1119 02:53:16.443616       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1119 02:53:16.479547       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1119 02:53:16.537795       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1119 02:53:16.607410       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1119 02:53:16.629869       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1119 02:53:16.630004       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1119 02:53:16.649763       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	I1119 02:53:19.566799       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 02:54:09.308468       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1119 02:54:09.308596       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1119 02:54:09.308608       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1119 02:54:09.308625       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 02:54:09.308880       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1119 02:54:09.308897       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Nov 19 02:54:16 pause-210634 kubelet[1321]: E1119 02:54:16.545046    1321 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-210634\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="d320cc8614c047ff979ab73a2d0c54ae" pod="kube-system/etcd-pause-210634"
	Nov 19 02:54:16 pause-210634 kubelet[1321]: E1119 02:54:16.545364    1321 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-210634\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="7ada54b88577f8537950907f13b1cc63" pod="kube-system/kube-controller-manager-pause-210634"
	Nov 19 02:54:16 pause-210634 kubelet[1321]: E1119 02:54:16.545749    1321 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-210634\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="65cf838043746d121fddeeac147c794c" pod="kube-system/kube-apiserver-pause-210634"
	Nov 19 02:54:16 pause-210634 kubelet[1321]: E1119 02:54:16.546103    1321 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-w68ds\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="6af1936b-8342-4b94-8c66-84cea32746ff" pod="kube-system/kindnet-w68ds"
	Nov 19 02:54:16 pause-210634 kubelet[1321]: E1119 02:54:16.546444    1321 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r7bhh\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="d06bc070-5f4f-4e5d-9268-f0bbefdd7fdb" pod="kube-system/kube-proxy-r7bhh"
	Nov 19 02:54:16 pause-210634 kubelet[1321]: E1119 02:54:16.547027    1321 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-p4snv\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="35d307ff-e63a-486d-9eb8-95e7cf67119f" pod="kube-system/coredns-66bc5c9577-p4snv"
	Nov 19 02:54:16 pause-210634 kubelet[1321]: I1119 02:54:16.551297    1321 scope.go:117] "RemoveContainer" containerID="99c5fdf54f0795b8af2a7e440cbeb21a2991c76dbb380f799c0f0a3f93211efa"
	Nov 19 02:54:16 pause-210634 kubelet[1321]: E1119 02:54:16.551962    1321 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-210634\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="79bb44ecbd60873c555aabcdc1b97eff" pod="kube-system/kube-scheduler-pause-210634"
	Nov 19 02:54:16 pause-210634 kubelet[1321]: E1119 02:54:16.552287    1321 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-210634\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="d320cc8614c047ff979ab73a2d0c54ae" pod="kube-system/etcd-pause-210634"
	Nov 19 02:54:16 pause-210634 kubelet[1321]: E1119 02:54:16.552616    1321 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-210634\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="7ada54b88577f8537950907f13b1cc63" pod="kube-system/kube-controller-manager-pause-210634"
	Nov 19 02:54:16 pause-210634 kubelet[1321]: E1119 02:54:16.553428    1321 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-210634\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="65cf838043746d121fddeeac147c794c" pod="kube-system/kube-apiserver-pause-210634"
	Nov 19 02:54:16 pause-210634 kubelet[1321]: E1119 02:54:16.553855    1321 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-w68ds\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="6af1936b-8342-4b94-8c66-84cea32746ff" pod="kube-system/kindnet-w68ds"
	Nov 19 02:54:16 pause-210634 kubelet[1321]: E1119 02:54:16.554559    1321 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r7bhh\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="d06bc070-5f4f-4e5d-9268-f0bbefdd7fdb" pod="kube-system/kube-proxy-r7bhh"
	Nov 19 02:54:16 pause-210634 kubelet[1321]: E1119 02:54:16.555756    1321 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-p4snv\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="35d307ff-e63a-486d-9eb8-95e7cf67119f" pod="kube-system/coredns-66bc5c9577-p4snv"
	Nov 19 02:54:22 pause-210634 kubelet[1321]: E1119 02:54:22.376642    1321 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-210634\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-210634' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Nov 19 02:54:22 pause-210634 kubelet[1321]: E1119 02:54:22.377002    1321 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-210634\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-210634' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Nov 19 02:54:22 pause-210634 kubelet[1321]: E1119 02:54:22.377177    1321 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-w68ds\" is forbidden: User \"system:node:pause-210634\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-210634' and this object" podUID="6af1936b-8342-4b94-8c66-84cea32746ff" pod="kube-system/kindnet-w68ds"
	Nov 19 02:54:22 pause-210634 kubelet[1321]: E1119 02:54:22.378022    1321 reflector.go:205] "Failed to watch" err="configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:pause-210634\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-210634' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Nov 19 02:54:22 pause-210634 kubelet[1321]: E1119 02:54:22.447401    1321 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-r7bhh\" is forbidden: User \"system:node:pause-210634\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-210634' and this object" podUID="d06bc070-5f4f-4e5d-9268-f0bbefdd7fdb" pod="kube-system/kube-proxy-r7bhh"
	Nov 19 02:54:22 pause-210634 kubelet[1321]: E1119 02:54:22.624904    1321 status_manager.go:1018] "Failed to get status for pod" err="pods \"coredns-66bc5c9577-p4snv\" is forbidden: User \"system:node:pause-210634\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-210634' and this object" podUID="35d307ff-e63a-486d-9eb8-95e7cf67119f" pod="kube-system/coredns-66bc5c9577-p4snv"
	Nov 19 02:54:22 pause-210634 kubelet[1321]: E1119 02:54:22.642948    1321 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-210634\" is forbidden: User \"system:node:pause-210634\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-210634' and this object" podUID="79bb44ecbd60873c555aabcdc1b97eff" pod="kube-system/kube-scheduler-pause-210634"
	Nov 19 02:54:28 pause-210634 kubelet[1321]: W1119 02:54:28.582548    1321 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Nov 19 02:54:34 pause-210634 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 19 02:54:34 pause-210634 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 19 02:54:34 pause-210634 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-210634 -n pause-210634
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-210634 -n pause-210634: exit status 2 (419.429056ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-210634 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-210634
helpers_test.go:243: (dbg) docker inspect pause-210634:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "249e4b242f0b17479157650f712844fbdd0c7142b9018c81418642ebff1bdf0d",
	        "Created": "2025-11-19T02:52:50.098081333Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1621421,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T02:52:50.176159394Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:6dfeb5329cc83d126555f88f43b09eef7a09c7f546c9166b94d33747df91b6df",
	        "ResolvConfPath": "/var/lib/docker/containers/249e4b242f0b17479157650f712844fbdd0c7142b9018c81418642ebff1bdf0d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/249e4b242f0b17479157650f712844fbdd0c7142b9018c81418642ebff1bdf0d/hostname",
	        "HostsPath": "/var/lib/docker/containers/249e4b242f0b17479157650f712844fbdd0c7142b9018c81418642ebff1bdf0d/hosts",
	        "LogPath": "/var/lib/docker/containers/249e4b242f0b17479157650f712844fbdd0c7142b9018c81418642ebff1bdf0d/249e4b242f0b17479157650f712844fbdd0c7142b9018c81418642ebff1bdf0d-json.log",
	        "Name": "/pause-210634",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "pause-210634:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-210634",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "249e4b242f0b17479157650f712844fbdd0c7142b9018c81418642ebff1bdf0d",
	                "LowerDir": "/var/lib/docker/overlay2/513f633e907bad9a3090db93b009d2e7332eff159b0048afcb42b9e0d24e9037-init/diff:/var/lib/docker/overlay2/c48d08e2bd245db4e1c5c6447aff9f72126e9377265a1f1172daf5070a059e2a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/513f633e907bad9a3090db93b009d2e7332eff159b0048afcb42b9e0d24e9037/merged",
	                "UpperDir": "/var/lib/docker/overlay2/513f633e907bad9a3090db93b009d2e7332eff159b0048afcb42b9e0d24e9037/diff",
	                "WorkDir": "/var/lib/docker/overlay2/513f633e907bad9a3090db93b009d2e7332eff159b0048afcb42b9e0d24e9037/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-210634",
	                "Source": "/var/lib/docker/volumes/pause-210634/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-210634",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-210634",
	                "name.minikube.sigs.k8s.io": "pause-210634",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "14d149e9772ca55437a24df0fbe0158d795bf76803bd2dc0467f0edca1859d21",
	            "SandboxKey": "/var/run/docker/netns/14d149e9772c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34870"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34871"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34874"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34872"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34873"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-210634": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9a:b4:21:70:1d:bb",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c6813d0b65e05a2d350ffe0eb6da8306397813b9881464223343b3645698449c",
	                    "EndpointID": "210cfdde01487a2968b85ea84ed19a181cc71bbf8a0aa33d51e9adc8e2011934",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-210634",
	                        "249e4b242f0b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-210634 -n pause-210634
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-210634 -n pause-210634: exit status 2 (347.29459ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-210634 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-210634 logs -n 25: (1.464112713s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-841094 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                    │ NoKubernetes-841094       │ jenkins │ v1.37.0 │ 19 Nov 25 02:48 UTC │ 19 Nov 25 02:49 UTC │
	│ start   │ -p missing-upgrade-794811 --memory=3072 --driver=docker  --container-runtime=crio                                                        │ missing-upgrade-794811    │ jenkins │ v1.32.0 │ 19 Nov 25 02:48 UTC │ 19 Nov 25 02:49 UTC │
	│ start   │ -p NoKubernetes-841094 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-841094       │ jenkins │ v1.37.0 │ 19 Nov 25 02:49 UTC │ 19 Nov 25 02:49 UTC │
	│ delete  │ -p NoKubernetes-841094                                                                                                                   │ NoKubernetes-841094       │ jenkins │ v1.37.0 │ 19 Nov 25 02:49 UTC │ 19 Nov 25 02:49 UTC │
	│ start   │ -p NoKubernetes-841094 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-841094       │ jenkins │ v1.37.0 │ 19 Nov 25 02:49 UTC │ 19 Nov 25 02:49 UTC │
	│ ssh     │ -p NoKubernetes-841094 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-841094       │ jenkins │ v1.37.0 │ 19 Nov 25 02:49 UTC │                     │
	│ stop    │ -p NoKubernetes-841094                                                                                                                   │ NoKubernetes-841094       │ jenkins │ v1.37.0 │ 19 Nov 25 02:49 UTC │ 19 Nov 25 02:49 UTC │
	│ start   │ -p NoKubernetes-841094 --driver=docker  --container-runtime=crio                                                                         │ NoKubernetes-841094       │ jenkins │ v1.37.0 │ 19 Nov 25 02:49 UTC │ 19 Nov 25 02:49 UTC │
	│ ssh     │ -p NoKubernetes-841094 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-841094       │ jenkins │ v1.37.0 │ 19 Nov 25 02:49 UTC │                     │
	│ delete  │ -p NoKubernetes-841094                                                                                                                   │ NoKubernetes-841094       │ jenkins │ v1.37.0 │ 19 Nov 25 02:49 UTC │ 19 Nov 25 02:49 UTC │
	│ start   │ -p kubernetes-upgrade-315505 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-315505 │ jenkins │ v1.37.0 │ 19 Nov 25 02:49 UTC │ 19 Nov 25 02:50 UTC │
	│ start   │ -p missing-upgrade-794811 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-794811    │ jenkins │ v1.37.0 │ 19 Nov 25 02:49 UTC │ 19 Nov 25 02:50 UTC │
	│ stop    │ -p kubernetes-upgrade-315505                                                                                                             │ kubernetes-upgrade-315505 │ jenkins │ v1.37.0 │ 19 Nov 25 02:50 UTC │ 19 Nov 25 02:50 UTC │
	│ start   │ -p kubernetes-upgrade-315505 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-315505 │ jenkins │ v1.37.0 │ 19 Nov 25 02:50 UTC │                     │
	│ delete  │ -p missing-upgrade-794811                                                                                                                │ missing-upgrade-794811    │ jenkins │ v1.37.0 │ 19 Nov 25 02:50 UTC │ 19 Nov 25 02:50 UTC │
	│ start   │ -p stopped-upgrade-245523 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-245523    │ jenkins │ v1.32.0 │ 19 Nov 25 02:50 UTC │ 19 Nov 25 02:51 UTC │
	│ stop    │ stopped-upgrade-245523 stop                                                                                                              │ stopped-upgrade-245523    │ jenkins │ v1.32.0 │ 19 Nov 25 02:51 UTC │ 19 Nov 25 02:51 UTC │
	│ start   │ -p stopped-upgrade-245523 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-245523    │ jenkins │ v1.37.0 │ 19 Nov 25 02:51 UTC │ 19 Nov 25 02:51 UTC │
	│ delete  │ -p stopped-upgrade-245523                                                                                                                │ stopped-upgrade-245523    │ jenkins │ v1.37.0 │ 19 Nov 25 02:51 UTC │ 19 Nov 25 02:51 UTC │
	│ start   │ -p running-upgrade-422316 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ running-upgrade-422316    │ jenkins │ v1.32.0 │ 19 Nov 25 02:51 UTC │ 19 Nov 25 02:52 UTC │
	│ start   │ -p running-upgrade-422316 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ running-upgrade-422316    │ jenkins │ v1.37.0 │ 19 Nov 25 02:52 UTC │ 19 Nov 25 02:52 UTC │
	│ delete  │ -p running-upgrade-422316                                                                                                                │ running-upgrade-422316    │ jenkins │ v1.37.0 │ 19 Nov 25 02:52 UTC │ 19 Nov 25 02:52 UTC │
	│ start   │ -p pause-210634 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-210634              │ jenkins │ v1.37.0 │ 19 Nov 25 02:52 UTC │ 19 Nov 25 02:54 UTC │
	│ start   │ -p pause-210634 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-210634              │ jenkins │ v1.37.0 │ 19 Nov 25 02:54 UTC │ 19 Nov 25 02:54 UTC │
	│ pause   │ -p pause-210634 --alsologtostderr -v=5                                                                                                   │ pause-210634              │ jenkins │ v1.37.0 │ 19 Nov 25 02:54 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 02:54:07
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 02:54:07.516228 1625403 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:54:07.516445 1625403 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:54:07.516477 1625403 out.go:374] Setting ErrFile to fd 2...
	I1119 02:54:07.516496 1625403 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:54:07.516770 1625403 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-1463525/.minikube/bin
	I1119 02:54:07.517141 1625403 out.go:368] Setting JSON to false
	I1119 02:54:07.518213 1625403 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":38175,"bootTime":1763482673,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1119 02:54:07.518309 1625403 start.go:143] virtualization:  
	I1119 02:54:07.523607 1625403 out.go:179] * [pause-210634] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1119 02:54:07.526863 1625403 notify.go:221] Checking for updates...
	I1119 02:54:07.533567 1625403 out.go:179]   - MINIKUBE_LOCATION=21924
	I1119 02:54:07.536694 1625403 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 02:54:07.539792 1625403 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21924-1463525/kubeconfig
	I1119 02:54:07.542874 1625403 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-1463525/.minikube
	I1119 02:54:07.545699 1625403 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1119 02:54:07.548517 1625403 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 02:54:07.551765 1625403 config.go:182] Loaded profile config "pause-210634": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:54:07.552322 1625403 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 02:54:07.597668 1625403 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1119 02:54:07.597778 1625403 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:54:07.687121 1625403 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-19 02:54:07.673467353 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 02:54:07.687218 1625403 docker.go:319] overlay module found
	I1119 02:54:07.690221 1625403 out.go:179] * Using the docker driver based on existing profile
	I1119 02:54:07.692969 1625403 start.go:309] selected driver: docker
	I1119 02:54:07.692983 1625403 start.go:930] validating driver "docker" against &{Name:pause-210634 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-210634 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:54:07.693093 1625403 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 02:54:07.693187 1625403 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:54:07.764935 1625403 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-19 02:54:07.755265794 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 02:54:07.765419 1625403 cni.go:84] Creating CNI manager for ""
	I1119 02:54:07.765483 1625403 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 02:54:07.765533 1625403 start.go:353] cluster config:
	{Name:pause-210634 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-210634 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:54:07.770684 1625403 out.go:179] * Starting "pause-210634" primary control-plane node in "pause-210634" cluster
	I1119 02:54:07.773367 1625403 cache.go:134] Beginning downloading kic base image for docker with crio
	I1119 02:54:07.776370 1625403 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1119 02:54:07.779137 1625403 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 02:54:07.779181 1625403 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1119 02:54:07.779190 1625403 cache.go:65] Caching tarball of preloaded images
	I1119 02:54:07.779280 1625403 preload.go:238] Found /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1119 02:54:07.779290 1625403 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 02:54:07.779439 1625403 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/pause-210634/config.json ...
	I1119 02:54:07.779645 1625403 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1119 02:54:07.814084 1625403 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1119 02:54:07.814103 1625403 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1119 02:54:07.814176 1625403 cache.go:243] Successfully downloaded all kic artifacts
	I1119 02:54:07.814234 1625403 start.go:360] acquireMachinesLock for pause-210634: {Name:mk19349f7139b87fee1a009db22474497ab35596 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 02:54:07.814335 1625403 start.go:364] duration metric: took 79.637µs to acquireMachinesLock for "pause-210634"
	I1119 02:54:07.814356 1625403 start.go:96] Skipping create...Using existing machine configuration
	I1119 02:54:07.814409 1625403 fix.go:54] fixHost starting: 
	I1119 02:54:07.814776 1625403 cli_runner.go:164] Run: docker container inspect pause-210634 --format={{.State.Status}}
	I1119 02:54:07.843750 1625403 fix.go:112] recreateIfNeeded on pause-210634: state=Running err=<nil>
	W1119 02:54:07.843776 1625403 fix.go:138] unexpected machine state, will restart: <nil>
	I1119 02:54:03.874342 1608779 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:33740->192.168.76.2:8443: read: connection reset by peer
	I1119 02:54:03.874407 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 02:54:03.874467 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 02:54:03.903151 1608779 cri.go:89] found id: "b7cd8cb563c444fc3e8b2f5a65af9f99e0a16e61affc3a354a308582b4899e79"
	I1119 02:54:03.903182 1608779 cri.go:89] found id: "689c473fe75d4f2b8d0567a81b8b468fc12add10ee60464c16d0c7fc0b6b067a"
	I1119 02:54:03.903187 1608779 cri.go:89] found id: ""
	I1119 02:54:03.903195 1608779 logs.go:282] 2 containers: [b7cd8cb563c444fc3e8b2f5a65af9f99e0a16e61affc3a354a308582b4899e79 689c473fe75d4f2b8d0567a81b8b468fc12add10ee60464c16d0c7fc0b6b067a]
	I1119 02:54:03.903252 1608779 ssh_runner.go:195] Run: which crictl
	I1119 02:54:03.907032 1608779 ssh_runner.go:195] Run: which crictl
	I1119 02:54:03.910460 1608779 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 02:54:03.910531 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 02:54:03.938541 1608779 cri.go:89] found id: ""
	I1119 02:54:03.938565 1608779 logs.go:282] 0 containers: []
	W1119 02:54:03.938573 1608779 logs.go:284] No container was found matching "etcd"
	I1119 02:54:03.938580 1608779 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 02:54:03.938637 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 02:54:03.967843 1608779 cri.go:89] found id: ""
	I1119 02:54:03.967868 1608779 logs.go:282] 0 containers: []
	W1119 02:54:03.967877 1608779 logs.go:284] No container was found matching "coredns"
	I1119 02:54:03.967884 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 02:54:03.967938 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 02:54:03.995383 1608779 cri.go:89] found id: "66d47217a8e90ebe2f0ae3a5a07bfee4ccb7f856fca793dec477ab23b15cc03d"
	I1119 02:54:03.995404 1608779 cri.go:89] found id: ""
	I1119 02:54:03.995412 1608779 logs.go:282] 1 containers: [66d47217a8e90ebe2f0ae3a5a07bfee4ccb7f856fca793dec477ab23b15cc03d]
	I1119 02:54:03.995465 1608779 ssh_runner.go:195] Run: which crictl
	I1119 02:54:03.999092 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 02:54:03.999168 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 02:54:04.028162 1608779 cri.go:89] found id: ""
	I1119 02:54:04.028187 1608779 logs.go:282] 0 containers: []
	W1119 02:54:04.028196 1608779 logs.go:284] No container was found matching "kube-proxy"
	I1119 02:54:04.028202 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 02:54:04.028261 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 02:54:04.055568 1608779 cri.go:89] found id: "b82017d2142c6347c55605889407441b8e982da03ec51f193858685925fb47c5"
	I1119 02:54:04.055591 1608779 cri.go:89] found id: "7d775c67113fa1266ea751d89fce6dee2939bcc84dea7253349321767730c733"
	I1119 02:54:04.055596 1608779 cri.go:89] found id: ""
	I1119 02:54:04.055604 1608779 logs.go:282] 2 containers: [b82017d2142c6347c55605889407441b8e982da03ec51f193858685925fb47c5 7d775c67113fa1266ea751d89fce6dee2939bcc84dea7253349321767730c733]
	I1119 02:54:04.055662 1608779 ssh_runner.go:195] Run: which crictl
	I1119 02:54:04.059561 1608779 ssh_runner.go:195] Run: which crictl
	I1119 02:54:04.063468 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 02:54:04.063543 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 02:54:04.105131 1608779 cri.go:89] found id: ""
	I1119 02:54:04.105157 1608779 logs.go:282] 0 containers: []
	W1119 02:54:04.105166 1608779 logs.go:284] No container was found matching "kindnet"
	I1119 02:54:04.105172 1608779 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 02:54:04.105237 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 02:54:04.155523 1608779 cri.go:89] found id: ""
	I1119 02:54:04.155558 1608779 logs.go:282] 0 containers: []
	W1119 02:54:04.155567 1608779 logs.go:284] No container was found matching "storage-provisioner"
	I1119 02:54:04.155580 1608779 logs.go:123] Gathering logs for dmesg ...
	I1119 02:54:04.155592 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 02:54:04.173626 1608779 logs.go:123] Gathering logs for kube-apiserver [b7cd8cb563c444fc3e8b2f5a65af9f99e0a16e61affc3a354a308582b4899e79] ...
	I1119 02:54:04.173655 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b7cd8cb563c444fc3e8b2f5a65af9f99e0a16e61affc3a354a308582b4899e79"
	I1119 02:54:04.206493 1608779 logs.go:123] Gathering logs for kube-apiserver [689c473fe75d4f2b8d0567a81b8b468fc12add10ee60464c16d0c7fc0b6b067a] ...
	I1119 02:54:04.206525 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689c473fe75d4f2b8d0567a81b8b468fc12add10ee60464c16d0c7fc0b6b067a"
	I1119 02:54:04.243783 1608779 logs.go:123] Gathering logs for kube-controller-manager [b82017d2142c6347c55605889407441b8e982da03ec51f193858685925fb47c5] ...
	I1119 02:54:04.243860 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b82017d2142c6347c55605889407441b8e982da03ec51f193858685925fb47c5"
	I1119 02:54:04.284456 1608779 logs.go:123] Gathering logs for CRI-O ...
	I1119 02:54:04.284536 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 02:54:04.355365 1608779 logs.go:123] Gathering logs for kubelet ...
	I1119 02:54:04.355443 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 02:54:04.507494 1608779 logs.go:123] Gathering logs for describe nodes ...
	I1119 02:54:04.507571 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 02:54:04.596145 1608779 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 02:54:04.596162 1608779 logs.go:123] Gathering logs for kube-scheduler [66d47217a8e90ebe2f0ae3a5a07bfee4ccb7f856fca793dec477ab23b15cc03d] ...
	I1119 02:54:04.596175 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 66d47217a8e90ebe2f0ae3a5a07bfee4ccb7f856fca793dec477ab23b15cc03d"
	I1119 02:54:04.660531 1608779 logs.go:123] Gathering logs for kube-controller-manager [7d775c67113fa1266ea751d89fce6dee2939bcc84dea7253349321767730c733] ...
	I1119 02:54:04.660565 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7d775c67113fa1266ea751d89fce6dee2939bcc84dea7253349321767730c733"
	I1119 02:54:04.687064 1608779 logs.go:123] Gathering logs for container status ...
	I1119 02:54:04.687090 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 02:54:07.218121 1608779 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 02:54:07.218602 1608779 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1119 02:54:07.218651 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 02:54:07.218714 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 02:54:07.244522 1608779 cri.go:89] found id: "b7cd8cb563c444fc3e8b2f5a65af9f99e0a16e61affc3a354a308582b4899e79"
	I1119 02:54:07.244548 1608779 cri.go:89] found id: ""
	I1119 02:54:07.244556 1608779 logs.go:282] 1 containers: [b7cd8cb563c444fc3e8b2f5a65af9f99e0a16e61affc3a354a308582b4899e79]
	I1119 02:54:07.244610 1608779 ssh_runner.go:195] Run: which crictl
	I1119 02:54:07.248189 1608779 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 02:54:07.248256 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 02:54:07.273778 1608779 cri.go:89] found id: ""
	I1119 02:54:07.273801 1608779 logs.go:282] 0 containers: []
	W1119 02:54:07.273810 1608779 logs.go:284] No container was found matching "etcd"
	I1119 02:54:07.273817 1608779 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 02:54:07.273875 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 02:54:07.300796 1608779 cri.go:89] found id: ""
	I1119 02:54:07.300820 1608779 logs.go:282] 0 containers: []
	W1119 02:54:07.300829 1608779 logs.go:284] No container was found matching "coredns"
	I1119 02:54:07.300837 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 02:54:07.300895 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 02:54:07.326681 1608779 cri.go:89] found id: "66d47217a8e90ebe2f0ae3a5a07bfee4ccb7f856fca793dec477ab23b15cc03d"
	I1119 02:54:07.326703 1608779 cri.go:89] found id: ""
	I1119 02:54:07.326710 1608779 logs.go:282] 1 containers: [66d47217a8e90ebe2f0ae3a5a07bfee4ccb7f856fca793dec477ab23b15cc03d]
	I1119 02:54:07.326764 1608779 ssh_runner.go:195] Run: which crictl
	I1119 02:54:07.330577 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 02:54:07.330662 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 02:54:07.367985 1608779 cri.go:89] found id: ""
	I1119 02:54:07.368019 1608779 logs.go:282] 0 containers: []
	W1119 02:54:07.368029 1608779 logs.go:284] No container was found matching "kube-proxy"
	I1119 02:54:07.368041 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 02:54:07.368099 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 02:54:07.422627 1608779 cri.go:89] found id: "b82017d2142c6347c55605889407441b8e982da03ec51f193858685925fb47c5"
	I1119 02:54:07.422654 1608779 cri.go:89] found id: "7d775c67113fa1266ea751d89fce6dee2939bcc84dea7253349321767730c733"
	I1119 02:54:07.422659 1608779 cri.go:89] found id: ""
	I1119 02:54:07.422667 1608779 logs.go:282] 2 containers: [b82017d2142c6347c55605889407441b8e982da03ec51f193858685925fb47c5 7d775c67113fa1266ea751d89fce6dee2939bcc84dea7253349321767730c733]
	I1119 02:54:07.422722 1608779 ssh_runner.go:195] Run: which crictl
	I1119 02:54:07.426865 1608779 ssh_runner.go:195] Run: which crictl
	I1119 02:54:07.439988 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 02:54:07.440089 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 02:54:07.479210 1608779 cri.go:89] found id: ""
	I1119 02:54:07.479229 1608779 logs.go:282] 0 containers: []
	W1119 02:54:07.479237 1608779 logs.go:284] No container was found matching "kindnet"
	I1119 02:54:07.479244 1608779 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 02:54:07.479307 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 02:54:07.520306 1608779 cri.go:89] found id: ""
	I1119 02:54:07.520324 1608779 logs.go:282] 0 containers: []
	W1119 02:54:07.520331 1608779 logs.go:284] No container was found matching "storage-provisioner"
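The block above enumerates control-plane containers one component at a time with `crictl ps -a --quiet --name=<component>`. A rough sketch of the same enumeration, assuming crictl is on PATH and sudo is non-interactive (minikube runs these through its SSH runner instead):

```go
// List container IDs per control-plane component via crictl.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
	for _, name := range components {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("%s: crictl failed: %v\n", name, err)
			continue
		}
		// --quiet prints one container ID per line; an empty result means
		// "No container was found matching" in the log above.
		ids := strings.Fields(strings.TrimSpace(string(out)))
		fmt.Printf("%s: %d containers %v\n", name, len(ids), ids)
	}
}
```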
	I1119 02:54:07.520345 1608779 logs.go:123] Gathering logs for dmesg ...
	I1119 02:54:07.520356 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 02:54:07.539571 1608779 logs.go:123] Gathering logs for kube-apiserver [b7cd8cb563c444fc3e8b2f5a65af9f99e0a16e61affc3a354a308582b4899e79] ...
	I1119 02:54:07.539596 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b7cd8cb563c444fc3e8b2f5a65af9f99e0a16e61affc3a354a308582b4899e79"
	I1119 02:54:07.585019 1608779 logs.go:123] Gathering logs for kube-scheduler [66d47217a8e90ebe2f0ae3a5a07bfee4ccb7f856fca793dec477ab23b15cc03d] ...
	I1119 02:54:07.585297 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 66d47217a8e90ebe2f0ae3a5a07bfee4ccb7f856fca793dec477ab23b15cc03d"
	I1119 02:54:07.697223 1608779 logs.go:123] Gathering logs for CRI-O ...
	I1119 02:54:07.697253 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 02:54:07.779709 1608779 logs.go:123] Gathering logs for container status ...
	I1119 02:54:07.779734 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 02:54:07.833820 1608779 logs.go:123] Gathering logs for kubelet ...
	I1119 02:54:07.833849 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 02:54:07.975741 1608779 logs.go:123] Gathering logs for describe nodes ...
	I1119 02:54:07.975831 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 02:54:08.069802 1608779 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 02:54:08.069819 1608779 logs.go:123] Gathering logs for kube-controller-manager [b82017d2142c6347c55605889407441b8e982da03ec51f193858685925fb47c5] ...
	I1119 02:54:08.069837 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b82017d2142c6347c55605889407441b8e982da03ec51f193858685925fb47c5"
	I1119 02:54:08.105675 1608779 logs.go:123] Gathering logs for kube-controller-manager [7d775c67113fa1266ea751d89fce6dee2939bcc84dea7253349321767730c733] ...
	I1119 02:54:08.105701 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7d775c67113fa1266ea751d89fce6dee2939bcc84dea7253349321767730c733"
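The log-gathering pass above pulls kubelet and CRI-O output from journald, kernel warnings from dmesg, and the last 400 lines of each interesting container. A sketch of those commands run directly on the node, assuming shell access; the container ID is a placeholder, not one of the IDs above:

```go
// Reproduce the log-gathering commands from the report on the node itself.
package main

import (
	"fmt"
	"os/exec"
)

func run(cmdline string) {
	out, err := exec.Command("/bin/bash", "-c", cmdline).CombinedOutput()
	fmt.Printf("$ %s\nerr=%v\n%s\n", cmdline, err, out)
}

func main() {
	// Substitute a real ID from `crictl ps -a`.
	id := "<container-id>"
	run("sudo journalctl -u kubelet -n 400")
	run("sudo journalctl -u crio -n 400")
	run("sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
	run(fmt.Sprintf("sudo crictl logs --tail 400 %s", id))
}
```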
	I1119 02:54:07.847082 1625403 out.go:252] * Updating the running docker "pause-210634" container ...
	I1119 02:54:07.847117 1625403 machine.go:94] provisionDockerMachine start ...
	I1119 02:54:07.847212 1625403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-210634
	I1119 02:54:07.879400 1625403 main.go:143] libmachine: Using SSH client type: native
	I1119 02:54:07.879736 1625403 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34870 <nil> <nil>}
	I1119 02:54:07.879747 1625403 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 02:54:08.048127 1625403 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-210634
	
	I1119 02:54:08.048153 1625403 ubuntu.go:182] provisioning hostname "pause-210634"
	I1119 02:54:08.048217 1625403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-210634
	I1119 02:54:08.066421 1625403 main.go:143] libmachine: Using SSH client type: native
	I1119 02:54:08.066725 1625403 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34870 <nil> <nil>}
	I1119 02:54:08.066739 1625403 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-210634 && echo "pause-210634" | sudo tee /etc/hostname
	I1119 02:54:08.237137 1625403 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-210634
	
	I1119 02:54:08.237213 1625403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-210634
	I1119 02:54:08.255593 1625403 main.go:143] libmachine: Using SSH client type: native
	I1119 02:54:08.255915 1625403 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34870 <nil> <nil>}
	I1119 02:54:08.255938 1625403 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-210634' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-210634/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-210634' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 02:54:08.398897 1625403 main.go:143] libmachine: SSH cmd err, output: <nil>: 
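The shell snippet above ensures the new hostname resolves locally: if no /etc/hosts line ends with the node name, it either rewrites the existing 127.0.1.1 entry or appends one. A pure-Go restatement of that logic on an in-memory hosts file, to make the three branches explicit (hostname and sample content are assumptions):

```go
// Ensure a "127.0.1.1 <name>" style entry exists in hosts-file content.
package main

import (
	"fmt"
	"strings"
)

func ensureHostname(hosts, name string) string {
	lines := strings.Split(hosts, "\n")
	for _, l := range lines {
		t := strings.TrimSpace(l)
		if strings.HasSuffix(t, " "+name) || strings.HasSuffix(t, "\t"+name) {
			return hosts // some line already maps the hostname
		}
	}
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + name // rewrite the existing loopback alias
			return strings.Join(lines, "\n")
		}
	}
	return hosts + "\n127.0.1.1 " + name + "\n" // no 127.0.1.1 line: append one
}

func main() {
	const sample = "127.0.0.1 localhost\n127.0.1.1 old-name\n"
	fmt.Print(ensureHostname(sample, "pause-210634"))
}
```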
	I1119 02:54:08.398921 1625403 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21924-1463525/.minikube CaCertPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21924-1463525/.minikube}
	I1119 02:54:08.398950 1625403 ubuntu.go:190] setting up certificates
	I1119 02:54:08.398968 1625403 provision.go:84] configureAuth start
	I1119 02:54:08.399031 1625403 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-210634
	I1119 02:54:08.417175 1625403 provision.go:143] copyHostCerts
	I1119 02:54:08.417239 1625403 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.pem, removing ...
	I1119 02:54:08.417256 1625403 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.pem
	I1119 02:54:08.417330 1625403 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.pem (1078 bytes)
	I1119 02:54:08.417422 1625403 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-1463525/.minikube/cert.pem, removing ...
	I1119 02:54:08.417428 1625403 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-1463525/.minikube/cert.pem
	I1119 02:54:08.417453 1625403 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21924-1463525/.minikube/cert.pem (1123 bytes)
	I1119 02:54:08.417503 1625403 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-1463525/.minikube/key.pem, removing ...
	I1119 02:54:08.417607 1625403 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-1463525/.minikube/key.pem
	I1119 02:54:08.417647 1625403 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21924-1463525/.minikube/key.pem (1675 bytes)
	I1119 02:54:08.417725 1625403 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca-key.pem org=jenkins.pause-210634 san=[127.0.0.1 192.168.85.2 localhost minikube pause-210634]
	I1119 02:54:08.933944 1625403 provision.go:177] copyRemoteCerts
	I1119 02:54:08.934034 1625403 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 02:54:08.934091 1625403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-210634
	I1119 02:54:08.951230 1625403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34870 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/pause-210634/id_rsa Username:docker}
	I1119 02:54:09.053283 1625403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1119 02:54:09.071812 1625403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1119 02:54:09.090300 1625403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1119 02:54:09.110589 1625403 provision.go:87] duration metric: took 711.607092ms to configureAuth
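configureAuth above regenerates a server certificate whose SANs cover 127.0.0.1, the container IP, localhost, minikube, and the profile name, signed by the shared minikube CA, then copies the PEMs to the node. A compact sketch of issuing such a SAN certificate with crypto/x509; to stay self-contained the CA is created in memory here, whereas minikube loads ca.pem/ca-key.pem from its certs directory (error handling elided):

```go
// Issue a server certificate for the SANs listed in the log, signed by a CA.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Stand-in for ca.pem / ca-key.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the SANs from the "generating server cert ... san=[...]" line.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "pause-210634", Organization: []string{"jenkins.pause-210634"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "pause-210634"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER})))
}
```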
	I1119 02:54:09.110614 1625403 ubuntu.go:206] setting minikube options for container-runtime
	I1119 02:54:09.110837 1625403 config.go:182] Loaded profile config "pause-210634": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:54:09.110946 1625403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-210634
	I1119 02:54:09.131501 1625403 main.go:143] libmachine: Using SSH client type: native
	I1119 02:54:09.131823 1625403 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34870 <nil> <nil>}
	I1119 02:54:09.131838 1625403 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 02:54:10.654380 1608779 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 02:54:10.654834 1608779 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1119 02:54:10.654886 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 02:54:10.654945 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 02:54:10.681415 1608779 cri.go:89] found id: "b7cd8cb563c444fc3e8b2f5a65af9f99e0a16e61affc3a354a308582b4899e79"
	I1119 02:54:10.681434 1608779 cri.go:89] found id: ""
	I1119 02:54:10.681441 1608779 logs.go:282] 1 containers: [b7cd8cb563c444fc3e8b2f5a65af9f99e0a16e61affc3a354a308582b4899e79]
	I1119 02:54:10.681500 1608779 ssh_runner.go:195] Run: which crictl
	I1119 02:54:10.685275 1608779 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 02:54:10.685348 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 02:54:10.711752 1608779 cri.go:89] found id: ""
	I1119 02:54:10.711775 1608779 logs.go:282] 0 containers: []
	W1119 02:54:10.711784 1608779 logs.go:284] No container was found matching "etcd"
	I1119 02:54:10.711790 1608779 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 02:54:10.711847 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 02:54:10.737074 1608779 cri.go:89] found id: ""
	I1119 02:54:10.737096 1608779 logs.go:282] 0 containers: []
	W1119 02:54:10.737105 1608779 logs.go:284] No container was found matching "coredns"
	I1119 02:54:10.737111 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 02:54:10.737167 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 02:54:10.765047 1608779 cri.go:89] found id: "66d47217a8e90ebe2f0ae3a5a07bfee4ccb7f856fca793dec477ab23b15cc03d"
	I1119 02:54:10.765071 1608779 cri.go:89] found id: ""
	I1119 02:54:10.765078 1608779 logs.go:282] 1 containers: [66d47217a8e90ebe2f0ae3a5a07bfee4ccb7f856fca793dec477ab23b15cc03d]
	I1119 02:54:10.765146 1608779 ssh_runner.go:195] Run: which crictl
	I1119 02:54:10.769104 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 02:54:10.769202 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 02:54:10.796001 1608779 cri.go:89] found id: ""
	I1119 02:54:10.796028 1608779 logs.go:282] 0 containers: []
	W1119 02:54:10.796038 1608779 logs.go:284] No container was found matching "kube-proxy"
	I1119 02:54:10.796046 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 02:54:10.796108 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 02:54:10.823066 1608779 cri.go:89] found id: "b82017d2142c6347c55605889407441b8e982da03ec51f193858685925fb47c5"
	I1119 02:54:10.823139 1608779 cri.go:89] found id: "7d775c67113fa1266ea751d89fce6dee2939bcc84dea7253349321767730c733"
	I1119 02:54:10.823165 1608779 cri.go:89] found id: ""
	I1119 02:54:10.823186 1608779 logs.go:282] 2 containers: [b82017d2142c6347c55605889407441b8e982da03ec51f193858685925fb47c5 7d775c67113fa1266ea751d89fce6dee2939bcc84dea7253349321767730c733]
	I1119 02:54:10.823258 1608779 ssh_runner.go:195] Run: which crictl
	I1119 02:54:10.826876 1608779 ssh_runner.go:195] Run: which crictl
	I1119 02:54:10.830543 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 02:54:10.830613 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 02:54:10.860044 1608779 cri.go:89] found id: ""
	I1119 02:54:10.860118 1608779 logs.go:282] 0 containers: []
	W1119 02:54:10.860140 1608779 logs.go:284] No container was found matching "kindnet"
	I1119 02:54:10.860158 1608779 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 02:54:10.860245 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 02:54:10.891801 1608779 cri.go:89] found id: ""
	I1119 02:54:10.891869 1608779 logs.go:282] 0 containers: []
	W1119 02:54:10.891886 1608779 logs.go:284] No container was found matching "storage-provisioner"
	I1119 02:54:10.891902 1608779 logs.go:123] Gathering logs for kube-scheduler [66d47217a8e90ebe2f0ae3a5a07bfee4ccb7f856fca793dec477ab23b15cc03d] ...
	I1119 02:54:10.891917 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 66d47217a8e90ebe2f0ae3a5a07bfee4ccb7f856fca793dec477ab23b15cc03d"
	I1119 02:54:10.954950 1608779 logs.go:123] Gathering logs for kube-controller-manager [b82017d2142c6347c55605889407441b8e982da03ec51f193858685925fb47c5] ...
	I1119 02:54:10.954989 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b82017d2142c6347c55605889407441b8e982da03ec51f193858685925fb47c5"
	I1119 02:54:10.981705 1608779 logs.go:123] Gathering logs for CRI-O ...
	I1119 02:54:10.981743 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 02:54:11.041195 1608779 logs.go:123] Gathering logs for dmesg ...
	I1119 02:54:11.041229 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 02:54:11.057694 1608779 logs.go:123] Gathering logs for kube-apiserver [b7cd8cb563c444fc3e8b2f5a65af9f99e0a16e61affc3a354a308582b4899e79] ...
	I1119 02:54:11.057723 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b7cd8cb563c444fc3e8b2f5a65af9f99e0a16e61affc3a354a308582b4899e79"
	I1119 02:54:11.092547 1608779 logs.go:123] Gathering logs for kube-controller-manager [7d775c67113fa1266ea751d89fce6dee2939bcc84dea7253349321767730c733] ...
	I1119 02:54:11.092581 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7d775c67113fa1266ea751d89fce6dee2939bcc84dea7253349321767730c733"
	I1119 02:54:11.120330 1608779 logs.go:123] Gathering logs for container status ...
	I1119 02:54:11.120360 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 02:54:11.154077 1608779 logs.go:123] Gathering logs for kubelet ...
	I1119 02:54:11.154104 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 02:54:11.273472 1608779 logs.go:123] Gathering logs for describe nodes ...
	I1119 02:54:11.273518 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 02:54:11.342011 1608779 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 02:54:14.558622 1625403 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 02:54:14.558643 1625403 machine.go:97] duration metric: took 6.711518159s to provisionDockerMachine
	I1119 02:54:14.558654 1625403 start.go:293] postStartSetup for "pause-210634" (driver="docker")
	I1119 02:54:14.558664 1625403 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 02:54:14.558742 1625403 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 02:54:14.558781 1625403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-210634
	I1119 02:54:14.580903 1625403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34870 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/pause-210634/id_rsa Username:docker}
	I1119 02:54:14.681302 1625403 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 02:54:14.684593 1625403 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 02:54:14.684623 1625403 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 02:54:14.684634 1625403 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-1463525/.minikube/addons for local assets ...
	I1119 02:54:14.684687 1625403 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-1463525/.minikube/files for local assets ...
	I1119 02:54:14.684771 1625403 filesync.go:149] local asset: /home/jenkins/minikube-integration/21924-1463525/.minikube/files/etc/ssl/certs/14653772.pem -> 14653772.pem in /etc/ssl/certs
	I1119 02:54:14.684879 1625403 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 02:54:14.692783 1625403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/files/etc/ssl/certs/14653772.pem --> /etc/ssl/certs/14653772.pem (1708 bytes)
	I1119 02:54:14.709891 1625403 start.go:296] duration metric: took 151.221829ms for postStartSetup
	I1119 02:54:14.709968 1625403 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 02:54:14.710006 1625403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-210634
	I1119 02:54:14.727189 1625403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34870 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/pause-210634/id_rsa Username:docker}
	I1119 02:54:14.827425 1625403 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 02:54:14.832624 1625403 fix.go:56] duration metric: took 7.018209344s for fixHost
	I1119 02:54:14.832647 1625403 start.go:83] releasing machines lock for "pause-210634", held for 7.01830192s
	I1119 02:54:14.832720 1625403 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-210634
	I1119 02:54:14.851644 1625403 ssh_runner.go:195] Run: cat /version.json
	I1119 02:54:14.851693 1625403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-210634
	I1119 02:54:14.851995 1625403 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 02:54:14.852053 1625403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-210634
	I1119 02:54:14.872590 1625403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34870 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/pause-210634/id_rsa Username:docker}
	I1119 02:54:14.873266 1625403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34870 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/pause-210634/id_rsa Username:docker}
	I1119 02:54:14.973338 1625403 ssh_runner.go:195] Run: systemctl --version
	I1119 02:54:15.075979 1625403 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 02:54:15.117818 1625403 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 02:54:15.122669 1625403 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 02:54:15.122768 1625403 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 02:54:15.130801 1625403 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1119 02:54:15.130823 1625403 start.go:496] detecting cgroup driver to use...
	I1119 02:54:15.130855 1625403 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1119 02:54:15.130923 1625403 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 02:54:15.153127 1625403 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 02:54:15.166786 1625403 docker.go:218] disabling cri-docker service (if available) ...
	I1119 02:54:15.166861 1625403 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 02:54:15.183722 1625403 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 02:54:15.197753 1625403 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 02:54:15.336036 1625403 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 02:54:15.469376 1625403 docker.go:234] disabling docker service ...
	I1119 02:54:15.469572 1625403 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 02:54:15.485134 1625403 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 02:54:15.499879 1625403 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 02:54:15.629180 1625403 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 02:54:15.766684 1625403 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 02:54:15.780525 1625403 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 02:54:15.797682 1625403 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 02:54:15.797797 1625403 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:54:15.809086 1625403 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1119 02:54:15.809174 1625403 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:54:15.824976 1625403 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:54:15.835656 1625403 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:54:15.845640 1625403 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 02:54:15.854393 1625403 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:54:15.863523 1625403 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:54:15.871404 1625403 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:54:15.879822 1625403 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 02:54:15.887328 1625403 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 02:54:15.895017 1625403 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:54:16.026692 1625403 ssh_runner.go:195] Run: sudo systemctl restart crio
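The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place to set the pause image, cgroup manager, conmon cgroup, and the unprivileged-port sysctl before restarting CRI-O. As a sketch of the same settings expressed as a separate drop-in file instead of in-place edits (an alternative approach, not what minikube does); the file name is hypothetical and the TOML section layout is assumed from CRI-O's usual configuration format:

```go
// Write an example CRI-O drop-in with the settings from the log, then restart crio.
// Must run as root (or via sudo) to touch /etc/crio and restart the unit.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

const dropIn = `[crio.image]
pause_image = "registry.k8s.io/pause:3.10.1"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
`

func main() {
	path := "/etc/crio/crio.conf.d/99-example.conf" // hypothetical name; CRI-O reads *.conf here
	if err := os.WriteFile(path, []byte(dropIn), 0o644); err != nil {
		fmt.Println("write:", err)
		return
	}
	out, err := exec.Command("systemctl", "restart", "crio").CombinedOutput()
	fmt.Printf("restart crio: err=%v\n%s", err, out)
}
```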
	I1119 02:54:16.254563 1625403 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 02:54:16.254680 1625403 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 02:54:16.259933 1625403 start.go:564] Will wait 60s for crictl version
	I1119 02:54:16.260006 1625403 ssh_runner.go:195] Run: which crictl
	I1119 02:54:16.263589 1625403 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 02:54:16.288074 1625403 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1119 02:54:16.288159 1625403 ssh_runner.go:195] Run: crio --version
	I1119 02:54:16.316446 1625403 ssh_runner.go:195] Run: crio --version
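After the restart, the log waits up to 60s for the CRI-O socket to appear and then for `crictl version` to answer. A small sketch of those two waits, assuming the default socket path:

```go
// Wait for /var/run/crio/crio.sock, then ask crictl for the runtime version.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	const sock = "/var/run/crio/crio.sock"
	deadline := time.Now().Add(60 * time.Second)
	for {
		if _, err := os.Stat(sock); err == nil {
			break // socket exists, CRI-O is accepting connections (or about to)
		}
		if time.Now().After(deadline) {
			fmt.Println("timed out waiting for", sock)
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	out, err := exec.Command("sudo", "crictl", "version").CombinedOutput()
	fmt.Printf("crictl version: err=%v\n%s", err, out)
}
```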
	I1119 02:54:16.352599 1625403 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1119 02:54:16.355631 1625403 cli_runner.go:164] Run: docker network inspect pause-210634 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 02:54:16.371651 1625403 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1119 02:54:16.375541 1625403 kubeadm.go:884] updating cluster {Name:pause-210634 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-210634 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 02:54:16.375701 1625403 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 02:54:16.375763 1625403 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 02:54:16.406517 1625403 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 02:54:16.406540 1625403 crio.go:433] Images already preloaded, skipping extraction
	I1119 02:54:16.406604 1625403 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 02:54:16.435469 1625403 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 02:54:16.435490 1625403 cache_images.go:86] Images are preloaded, skipping loading
	I1119 02:54:16.435498 1625403 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1119 02:54:16.435599 1625403 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-210634 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-210634 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 02:54:16.435684 1625403 ssh_runner.go:195] Run: crio config
	I1119 02:54:16.500449 1625403 cni.go:84] Creating CNI manager for ""
	I1119 02:54:16.500615 1625403 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 02:54:16.500643 1625403 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 02:54:16.500668 1625403 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-210634 NodeName:pause-210634 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 02:54:16.500794 1625403 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-210634"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1119 02:54:16.500868 1625403 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 02:54:16.509790 1625403 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 02:54:16.509898 1625403 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 02:54:16.517247 1625403 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1119 02:54:16.529988 1625403 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 02:54:16.542798 1625403 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1119 02:54:16.555361 1625403 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1119 02:54:16.559554 1625403 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:54:16.791730 1625403 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 02:54:16.822182 1625403 certs.go:69] Setting up /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/pause-210634 for IP: 192.168.85.2
	I1119 02:54:16.822242 1625403 certs.go:195] generating shared ca certs ...
	I1119 02:54:16.822273 1625403 certs.go:227] acquiring lock for ca certs: {Name:mk25124d30540ffea11da5f0aa38fb6f55186602 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:54:16.822430 1625403 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.key
	I1119 02:54:16.822498 1625403 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/proxy-client-ca.key
	I1119 02:54:16.822520 1625403 certs.go:257] generating profile certs ...
	I1119 02:54:16.822633 1625403 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/pause-210634/client.key
	I1119 02:54:16.822722 1625403 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/pause-210634/apiserver.key.465ead23
	I1119 02:54:16.822799 1625403 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/pause-210634/proxy-client.key
	I1119 02:54:16.822964 1625403 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/1465377.pem (1338 bytes)
	W1119 02:54:16.823017 1625403 certs.go:480] ignoring /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/1465377_empty.pem, impossibly tiny 0 bytes
	I1119 02:54:16.823041 1625403 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca-key.pem (1679 bytes)
	I1119 02:54:16.823104 1625403 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem (1078 bytes)
	I1119 02:54:16.823153 1625403 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/cert.pem (1123 bytes)
	I1119 02:54:16.823210 1625403 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/key.pem (1675 bytes)
	I1119 02:54:16.823278 1625403 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/files/etc/ssl/certs/14653772.pem (1708 bytes)
	I1119 02:54:16.823931 1625403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 02:54:16.878147 1625403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1119 02:54:16.908149 1625403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 02:54:16.939152 1625403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1119 02:54:16.970998 1625403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/pause-210634/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1119 02:54:16.994829 1625403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/pause-210634/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1119 02:54:17.019893 1625403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/pause-210634/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 02:54:17.049762 1625403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/pause-210634/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1119 02:54:17.076158 1625403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/1465377.pem --> /usr/share/ca-certificates/1465377.pem (1338 bytes)
	I1119 02:54:17.104170 1625403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/files/etc/ssl/certs/14653772.pem --> /usr/share/ca-certificates/14653772.pem (1708 bytes)
	I1119 02:54:17.158858 1625403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 02:54:17.198809 1625403 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 02:54:17.218413 1625403 ssh_runner.go:195] Run: openssl version
	I1119 02:54:17.229171 1625403 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1465377.pem && ln -fs /usr/share/ca-certificates/1465377.pem /etc/ssl/certs/1465377.pem"
	I1119 02:54:17.250816 1625403 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1465377.pem
	I1119 02:54:17.254886 1625403 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 02:04 /usr/share/ca-certificates/1465377.pem
	I1119 02:54:17.254997 1625403 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1465377.pem
	I1119 02:54:17.326692 1625403 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1465377.pem /etc/ssl/certs/51391683.0"
	I1119 02:54:17.338583 1625403 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14653772.pem && ln -fs /usr/share/ca-certificates/14653772.pem /etc/ssl/certs/14653772.pem"
	I1119 02:54:17.356129 1625403 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14653772.pem
	I1119 02:54:17.370020 1625403 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 02:04 /usr/share/ca-certificates/14653772.pem
	I1119 02:54:17.370163 1625403 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14653772.pem
	I1119 02:54:17.450031 1625403 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14653772.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 02:54:17.461732 1625403 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 02:54:17.479373 1625403 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:54:17.483904 1625403 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 01:58 /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:54:17.484024 1625403 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:54:17.548056 1625403 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
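The ls/openssl/ln sequence above installs each extra CA into /etc/ssl/certs under OpenSSL's subject-hash naming: compute the hash with `openssl x509 -hash -noout`, then point a `<hash>.0` symlink at the PEM. A sketch of those two steps, assuming openssl is available and the paths are writable (the log links via an intermediate copy in /etc/ssl/certs; this links the source PEM directly):

```go
// Create the OpenSSL subject-hash symlink for a CA certificate.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		fmt.Println("openssl:", err)
		return
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := "/etc/ssl/certs/" + hash + ".0"
	_ = os.Remove(link) // mimic ln -fs: replace any existing link
	if err := os.Symlink(pemPath, link); err != nil {
		fmt.Println("symlink:", err)
		return
	}
	fmt.Println("linked", link, "->", pemPath)
}
```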
	I1119 02:54:17.561925 1625403 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 02:54:17.570363 1625403 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1119 02:54:17.678831 1625403 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1119 02:54:17.763317 1625403 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1119 02:54:17.851718 1625403 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1119 02:54:17.927358 1625403 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1119 02:54:18.005099 1625403 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
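Each `openssl x509 -checkend 86400` call above asks whether that control-plane certificate will still be valid 24 hours from now (exit status 1 would trigger regeneration). An equivalent check in Go that parses the PEM directly; the path is one of the files from the log, used here as a placeholder:

```go
// Report whether a certificate expires within the next 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Println("read:", err)
		return
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Println("no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println("parse:", err)
		return
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h:", cert.NotAfter)
	} else {
		fmt.Println("certificate still valid at", cert.NotAfter)
	}
}
```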
	I1119 02:54:18.085775 1625403 kubeadm.go:401] StartCluster: {Name:pause-210634 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-210634 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-
aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:54:18.085957 1625403 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 02:54:18.086060 1625403 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 02:54:18.174810 1625403 cri.go:89] found id: "eae9c2747a6c090141bed859a946caeda9db2b858e668856f491efbcc50cc1f0"
	I1119 02:54:18.174882 1625403 cri.go:89] found id: "5b278aa8fa67a48484345b54805bb5c29e9162a5f54af5ee074fef8c5766b072"
	I1119 02:54:18.174901 1625403 cri.go:89] found id: "b48490446c0478d4b524dee6f413a7df871f5a694e492a481e6b826633cf96b5"
	I1119 02:54:18.174937 1625403 cri.go:89] found id: "003c29925c9f74027c0217cc5ade71e94414e340d291dd591f874bc578fbea1e"
	I1119 02:54:18.174960 1625403 cri.go:89] found id: "e150eb077157007e590ae1733965580b7175324548f25a32203bab129b2bd815"
	I1119 02:54:18.174979 1625403 cri.go:89] found id: "92c3ad91064be1f0a314b7990a3febf510451be94e08960b34dcdff3fadc057b"
	I1119 02:54:18.174997 1625403 cri.go:89] found id: "eb8cf828ba50c64f1cda8c35f26f210691c1cb238dd26ab0d751895dff0facef"
	I1119 02:54:18.175015 1625403 cri.go:89] found id: "1ca5597ffbe5c95f3994042247107836a869c47234e06acdc0ead2bc3dded4ac"
	I1119 02:54:18.175041 1625403 cri.go:89] found id: "b89fe3c5e3979493011dde519b29f7ae915a6bd84a62073ff542628e53e0b863"
	I1119 02:54:18.175067 1625403 cri.go:89] found id: "1cb2a0f2c8744125697bf96d494f11f98ffa7f0812d3661e3ae50c530dcb2241"
	I1119 02:54:18.175086 1625403 cri.go:89] found id: "6e6bccbb7a956f12be30b89d73d28b8866fad0012f5638de488e292311f075e7"
	I1119 02:54:18.175103 1625403 cri.go:89] found id: "e3f1e86ddd1d329884483c2ad1df7a1973076d6ec408d93655d53c17a56315e3"
	I1119 02:54:18.175121 1625403 cri.go:89] found id: "da21118c4e7ffd2ca35cc7a4a6cbace43bc77174d343ccb4fbf9ea2f65d04d5e"
	I1119 02:54:18.175150 1625403 cri.go:89] found id: "99c5fdf54f0795b8af2a7e440cbeb21a2991c76dbb380f799c0f0a3f93211efa"
	I1119 02:54:18.175174 1625403 cri.go:89] found id: ""
	I1119 02:54:18.175258 1625403 ssh_runner.go:195] Run: sudo runc list -f json
	W1119 02:54:18.190999 1625403 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:54:18Z" level=error msg="open /run/runc: no such file or directory"
	I1119 02:54:18.191139 1625403 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 02:54:18.203524 1625403 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1119 02:54:18.203589 1625403 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1119 02:54:18.203676 1625403 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1119 02:54:18.215496 1625403 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1119 02:54:18.218254 1625403 kubeconfig.go:125] found "pause-210634" server: "https://192.168.85.2:8443"
	I1119 02:54:18.219248 1625403 kapi.go:59] client config for pause-210634: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/pause-210634/client.crt", KeyFile:"/home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/pause-210634/client.key", CAFile:"/home/jenkins/minikube-integration/21924-1463525/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:
[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2127810), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1119 02:54:18.219807 1625403 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1119 02:54:18.219845 1625403 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1119 02:54:18.219909 1625403 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1119 02:54:18.219934 1625403 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1119 02:54:18.219953 1625403 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1119 02:54:18.220264 1625403 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1119 02:54:18.235515 1625403 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1119 02:54:18.235587 1625403 kubeadm.go:602] duration metric: took 31.979222ms to restartPrimaryControlPlane
	I1119 02:54:18.235614 1625403 kubeadm.go:403] duration metric: took 149.847853ms to StartCluster
	I1119 02:54:18.235655 1625403 settings.go:142] acquiring lock: {Name:mk60840cd190596747bdc8f4ec6ab9c30048bf11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:54:18.235734 1625403 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21924-1463525/kubeconfig
	I1119 02:54:18.236577 1625403 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/kubeconfig: {Name:mk569dbb5fa76b01404eb61ebb04e9884665ea78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:54:18.236849 1625403 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 02:54:18.237250 1625403 config.go:182] Loaded profile config "pause-210634": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:54:18.237223 1625403 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 02:54:18.242348 1625403 out.go:179] * Enabled addons: 
	I1119 02:54:18.242477 1625403 out.go:179] * Verifying Kubernetes components...
	I1119 02:54:13.842841 1608779 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 02:54:13.843296 1608779 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1119 02:54:13.843354 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 02:54:13.843461 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 02:54:13.868357 1608779 cri.go:89] found id: "b7cd8cb563c444fc3e8b2f5a65af9f99e0a16e61affc3a354a308582b4899e79"
	I1119 02:54:13.868379 1608779 cri.go:89] found id: ""
	I1119 02:54:13.868387 1608779 logs.go:282] 1 containers: [b7cd8cb563c444fc3e8b2f5a65af9f99e0a16e61affc3a354a308582b4899e79]
	I1119 02:54:13.868445 1608779 ssh_runner.go:195] Run: which crictl
	I1119 02:54:13.872299 1608779 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 02:54:13.872367 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 02:54:13.897565 1608779 cri.go:89] found id: ""
	I1119 02:54:13.897637 1608779 logs.go:282] 0 containers: []
	W1119 02:54:13.897672 1608779 logs.go:284] No container was found matching "etcd"
	I1119 02:54:13.897696 1608779 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 02:54:13.897785 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 02:54:13.922997 1608779 cri.go:89] found id: ""
	I1119 02:54:13.923020 1608779 logs.go:282] 0 containers: []
	W1119 02:54:13.923037 1608779 logs.go:284] No container was found matching "coredns"
	I1119 02:54:13.923044 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 02:54:13.923102 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 02:54:13.950542 1608779 cri.go:89] found id: "66d47217a8e90ebe2f0ae3a5a07bfee4ccb7f856fca793dec477ab23b15cc03d"
	I1119 02:54:13.950566 1608779 cri.go:89] found id: ""
	I1119 02:54:13.950574 1608779 logs.go:282] 1 containers: [66d47217a8e90ebe2f0ae3a5a07bfee4ccb7f856fca793dec477ab23b15cc03d]
	I1119 02:54:13.950660 1608779 ssh_runner.go:195] Run: which crictl
	I1119 02:54:13.954579 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 02:54:13.954680 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 02:54:13.981689 1608779 cri.go:89] found id: ""
	I1119 02:54:13.981729 1608779 logs.go:282] 0 containers: []
	W1119 02:54:13.981739 1608779 logs.go:284] No container was found matching "kube-proxy"
	I1119 02:54:13.981746 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 02:54:13.981814 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 02:54:14.011315 1608779 cri.go:89] found id: "b82017d2142c6347c55605889407441b8e982da03ec51f193858685925fb47c5"
	I1119 02:54:14.011340 1608779 cri.go:89] found id: "7d775c67113fa1266ea751d89fce6dee2939bcc84dea7253349321767730c733"
	I1119 02:54:14.011353 1608779 cri.go:89] found id: ""
	I1119 02:54:14.011360 1608779 logs.go:282] 2 containers: [b82017d2142c6347c55605889407441b8e982da03ec51f193858685925fb47c5 7d775c67113fa1266ea751d89fce6dee2939bcc84dea7253349321767730c733]
	I1119 02:54:14.011423 1608779 ssh_runner.go:195] Run: which crictl
	I1119 02:54:14.015611 1608779 ssh_runner.go:195] Run: which crictl
	I1119 02:54:14.019704 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 02:54:14.019783 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 02:54:14.050245 1608779 cri.go:89] found id: ""
	I1119 02:54:14.050272 1608779 logs.go:282] 0 containers: []
	W1119 02:54:14.050281 1608779 logs.go:284] No container was found matching "kindnet"
	I1119 02:54:14.050289 1608779 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 02:54:14.050381 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 02:54:14.078827 1608779 cri.go:89] found id: ""
	I1119 02:54:14.078856 1608779 logs.go:282] 0 containers: []
	W1119 02:54:14.078866 1608779 logs.go:284] No container was found matching "storage-provisioner"
	I1119 02:54:14.078879 1608779 logs.go:123] Gathering logs for kubelet ...
	I1119 02:54:14.078892 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 02:54:14.196212 1608779 logs.go:123] Gathering logs for kube-controller-manager [7d775c67113fa1266ea751d89fce6dee2939bcc84dea7253349321767730c733] ...
	I1119 02:54:14.196251 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7d775c67113fa1266ea751d89fce6dee2939bcc84dea7253349321767730c733"
	I1119 02:54:14.225192 1608779 logs.go:123] Gathering logs for CRI-O ...
	I1119 02:54:14.225226 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 02:54:14.284353 1608779 logs.go:123] Gathering logs for container status ...
	I1119 02:54:14.284389 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 02:54:14.328098 1608779 logs.go:123] Gathering logs for dmesg ...
	I1119 02:54:14.328126 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 02:54:14.348781 1608779 logs.go:123] Gathering logs for describe nodes ...
	I1119 02:54:14.348809 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 02:54:14.434833 1608779 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 02:54:14.434851 1608779 logs.go:123] Gathering logs for kube-apiserver [b7cd8cb563c444fc3e8b2f5a65af9f99e0a16e61affc3a354a308582b4899e79] ...
	I1119 02:54:14.434863 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b7cd8cb563c444fc3e8b2f5a65af9f99e0a16e61affc3a354a308582b4899e79"
	I1119 02:54:14.470708 1608779 logs.go:123] Gathering logs for kube-scheduler [66d47217a8e90ebe2f0ae3a5a07bfee4ccb7f856fca793dec477ab23b15cc03d] ...
	I1119 02:54:14.470742 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 66d47217a8e90ebe2f0ae3a5a07bfee4ccb7f856fca793dec477ab23b15cc03d"
	I1119 02:54:14.537594 1608779 logs.go:123] Gathering logs for kube-controller-manager [b82017d2142c6347c55605889407441b8e982da03ec51f193858685925fb47c5] ...
	I1119 02:54:14.537632 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b82017d2142c6347c55605889407441b8e982da03ec51f193858685925fb47c5"
	I1119 02:54:17.077624 1608779 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 02:54:17.077962 1608779 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1119 02:54:17.078013 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 02:54:17.078069 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 02:54:17.119575 1608779 cri.go:89] found id: "b7cd8cb563c444fc3e8b2f5a65af9f99e0a16e61affc3a354a308582b4899e79"
	I1119 02:54:17.119595 1608779 cri.go:89] found id: ""
	I1119 02:54:17.119603 1608779 logs.go:282] 1 containers: [b7cd8cb563c444fc3e8b2f5a65af9f99e0a16e61affc3a354a308582b4899e79]
	I1119 02:54:17.119656 1608779 ssh_runner.go:195] Run: which crictl
	I1119 02:54:17.123311 1608779 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 02:54:17.123389 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 02:54:17.180777 1608779 cri.go:89] found id: ""
	I1119 02:54:17.180805 1608779 logs.go:282] 0 containers: []
	W1119 02:54:17.180813 1608779 logs.go:284] No container was found matching "etcd"
	I1119 02:54:17.180820 1608779 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 02:54:17.180875 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 02:54:17.230373 1608779 cri.go:89] found id: ""
	I1119 02:54:17.230409 1608779 logs.go:282] 0 containers: []
	W1119 02:54:17.230421 1608779 logs.go:284] No container was found matching "coredns"
	I1119 02:54:17.230428 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 02:54:17.230486 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 02:54:17.298996 1608779 cri.go:89] found id: "66d47217a8e90ebe2f0ae3a5a07bfee4ccb7f856fca793dec477ab23b15cc03d"
	I1119 02:54:17.299021 1608779 cri.go:89] found id: ""
	I1119 02:54:17.299029 1608779 logs.go:282] 1 containers: [66d47217a8e90ebe2f0ae3a5a07bfee4ccb7f856fca793dec477ab23b15cc03d]
	I1119 02:54:17.299083 1608779 ssh_runner.go:195] Run: which crictl
	I1119 02:54:17.302576 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 02:54:17.302650 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 02:54:17.355347 1608779 cri.go:89] found id: ""
	I1119 02:54:17.355376 1608779 logs.go:282] 0 containers: []
	W1119 02:54:17.355385 1608779 logs.go:284] No container was found matching "kube-proxy"
	I1119 02:54:17.355391 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 02:54:17.355453 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 02:54:17.409389 1608779 cri.go:89] found id: "b82017d2142c6347c55605889407441b8e982da03ec51f193858685925fb47c5"
	I1119 02:54:17.409413 1608779 cri.go:89] found id: ""
	I1119 02:54:17.409421 1608779 logs.go:282] 1 containers: [b82017d2142c6347c55605889407441b8e982da03ec51f193858685925fb47c5]
	I1119 02:54:17.409477 1608779 ssh_runner.go:195] Run: which crictl
	I1119 02:54:17.413102 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 02:54:17.413194 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 02:54:17.458918 1608779 cri.go:89] found id: ""
	I1119 02:54:17.458945 1608779 logs.go:282] 0 containers: []
	W1119 02:54:17.458954 1608779 logs.go:284] No container was found matching "kindnet"
	I1119 02:54:17.458960 1608779 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 02:54:17.459020 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 02:54:17.503733 1608779 cri.go:89] found id: ""
	I1119 02:54:17.503760 1608779 logs.go:282] 0 containers: []
	W1119 02:54:17.503769 1608779 logs.go:284] No container was found matching "storage-provisioner"
	I1119 02:54:17.503778 1608779 logs.go:123] Gathering logs for dmesg ...
	I1119 02:54:17.503790 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 02:54:17.531637 1608779 logs.go:123] Gathering logs for describe nodes ...
	I1119 02:54:17.531671 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 02:54:17.656761 1608779 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 02:54:17.656787 1608779 logs.go:123] Gathering logs for kube-apiserver [b7cd8cb563c444fc3e8b2f5a65af9f99e0a16e61affc3a354a308582b4899e79] ...
	I1119 02:54:17.656800 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b7cd8cb563c444fc3e8b2f5a65af9f99e0a16e61affc3a354a308582b4899e79"
	I1119 02:54:17.714853 1608779 logs.go:123] Gathering logs for kube-scheduler [66d47217a8e90ebe2f0ae3a5a07bfee4ccb7f856fca793dec477ab23b15cc03d] ...
	I1119 02:54:17.714887 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 66d47217a8e90ebe2f0ae3a5a07bfee4ccb7f856fca793dec477ab23b15cc03d"
	I1119 02:54:17.813725 1608779 logs.go:123] Gathering logs for kube-controller-manager [b82017d2142c6347c55605889407441b8e982da03ec51f193858685925fb47c5] ...
	I1119 02:54:17.813761 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b82017d2142c6347c55605889407441b8e982da03ec51f193858685925fb47c5"
	I1119 02:54:17.851207 1608779 logs.go:123] Gathering logs for CRI-O ...
	I1119 02:54:17.851239 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 02:54:17.931528 1608779 logs.go:123] Gathering logs for container status ...
	I1119 02:54:17.931563 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 02:54:17.973107 1608779 logs.go:123] Gathering logs for kubelet ...
	I1119 02:54:17.973144 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 02:54:18.245174 1625403 addons.go:515] duration metric: took 7.940568ms for enable addons: enabled=[]
	I1119 02:54:18.245292 1625403 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:54:18.549656 1625403 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 02:54:18.564616 1625403 node_ready.go:35] waiting up to 6m0s for node "pause-210634" to be "Ready" ...
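For context on the wait that node_ready.go starts here: it repeatedly fetches the node object and inspects its Ready condition until it reports True or the timeout expires. A minimal illustrative sketch of that pattern using client-go (hypothetical helper name waitNodeReady; this is not minikube's actual node_ready.go) could look like:

package example

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady polls until the named node reports Ready=True or the timeout
// expires. Illustrative only; not minikube's implementation.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // transient apiserver errors: keep polling
			}
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

Called as waitNodeReady(ctx, clientset, "pause-210634", 6*time.Minute), this returns nil once the node's Ready condition becomes True, matching the "waiting up to 6m0s for node" line above.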
	I1119 02:54:20.623219 1608779 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 02:54:20.623611 1608779 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1119 02:54:20.623672 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 02:54:20.623731 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 02:54:20.668938 1608779 cri.go:89] found id: "b7cd8cb563c444fc3e8b2f5a65af9f99e0a16e61affc3a354a308582b4899e79"
	I1119 02:54:20.668963 1608779 cri.go:89] found id: ""
	I1119 02:54:20.668972 1608779 logs.go:282] 1 containers: [b7cd8cb563c444fc3e8b2f5a65af9f99e0a16e61affc3a354a308582b4899e79]
	I1119 02:54:20.669025 1608779 ssh_runner.go:195] Run: which crictl
	I1119 02:54:20.677924 1608779 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 02:54:20.678002 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 02:54:20.744920 1608779 cri.go:89] found id: ""
	I1119 02:54:20.744947 1608779 logs.go:282] 0 containers: []
	W1119 02:54:20.744956 1608779 logs.go:284] No container was found matching "etcd"
	I1119 02:54:20.744968 1608779 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 02:54:20.745026 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 02:54:20.791188 1608779 cri.go:89] found id: ""
	I1119 02:54:20.791216 1608779 logs.go:282] 0 containers: []
	W1119 02:54:20.791225 1608779 logs.go:284] No container was found matching "coredns"
	I1119 02:54:20.791232 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 02:54:20.791295 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 02:54:20.825816 1608779 cri.go:89] found id: "66d47217a8e90ebe2f0ae3a5a07bfee4ccb7f856fca793dec477ab23b15cc03d"
	I1119 02:54:20.825841 1608779 cri.go:89] found id: ""
	I1119 02:54:20.825850 1608779 logs.go:282] 1 containers: [66d47217a8e90ebe2f0ae3a5a07bfee4ccb7f856fca793dec477ab23b15cc03d]
	I1119 02:54:20.825904 1608779 ssh_runner.go:195] Run: which crictl
	I1119 02:54:20.829824 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 02:54:20.829901 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 02:54:20.859120 1608779 cri.go:89] found id: ""
	I1119 02:54:20.859149 1608779 logs.go:282] 0 containers: []
	W1119 02:54:20.859157 1608779 logs.go:284] No container was found matching "kube-proxy"
	I1119 02:54:20.859163 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 02:54:20.859225 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 02:54:20.895708 1608779 cri.go:89] found id: "b82017d2142c6347c55605889407441b8e982da03ec51f193858685925fb47c5"
	I1119 02:54:20.895734 1608779 cri.go:89] found id: ""
	I1119 02:54:20.895742 1608779 logs.go:282] 1 containers: [b82017d2142c6347c55605889407441b8e982da03ec51f193858685925fb47c5]
	I1119 02:54:20.895797 1608779 ssh_runner.go:195] Run: which crictl
	I1119 02:54:20.899460 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 02:54:20.899532 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 02:54:20.928813 1608779 cri.go:89] found id: ""
	I1119 02:54:20.928841 1608779 logs.go:282] 0 containers: []
	W1119 02:54:20.928850 1608779 logs.go:284] No container was found matching "kindnet"
	I1119 02:54:20.928856 1608779 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 02:54:20.928913 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 02:54:20.967889 1608779 cri.go:89] found id: ""
	I1119 02:54:20.967916 1608779 logs.go:282] 0 containers: []
	W1119 02:54:20.967925 1608779 logs.go:284] No container was found matching "storage-provisioner"
	I1119 02:54:20.967933 1608779 logs.go:123] Gathering logs for CRI-O ...
	I1119 02:54:20.967945 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 02:54:21.044181 1608779 logs.go:123] Gathering logs for container status ...
	I1119 02:54:21.044219 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 02:54:21.123818 1608779 logs.go:123] Gathering logs for kubelet ...
	I1119 02:54:21.123848 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 02:54:21.271050 1608779 logs.go:123] Gathering logs for dmesg ...
	I1119 02:54:21.271086 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 02:54:21.298616 1608779 logs.go:123] Gathering logs for describe nodes ...
	I1119 02:54:21.298646 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 02:54:21.424810 1608779 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 02:54:21.424833 1608779 logs.go:123] Gathering logs for kube-apiserver [b7cd8cb563c444fc3e8b2f5a65af9f99e0a16e61affc3a354a308582b4899e79] ...
	I1119 02:54:21.424853 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b7cd8cb563c444fc3e8b2f5a65af9f99e0a16e61affc3a354a308582b4899e79"
	I1119 02:54:21.476709 1608779 logs.go:123] Gathering logs for kube-scheduler [66d47217a8e90ebe2f0ae3a5a07bfee4ccb7f856fca793dec477ab23b15cc03d] ...
	I1119 02:54:21.476744 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 66d47217a8e90ebe2f0ae3a5a07bfee4ccb7f856fca793dec477ab23b15cc03d"
	I1119 02:54:21.564650 1608779 logs.go:123] Gathering logs for kube-controller-manager [b82017d2142c6347c55605889407441b8e982da03ec51f193858685925fb47c5] ...
	I1119 02:54:21.564691 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b82017d2142c6347c55605889407441b8e982da03ec51f193858685925fb47c5"
	I1119 02:54:22.641446 1625403 node_ready.go:49] node "pause-210634" is "Ready"
	I1119 02:54:22.641473 1625403 node_ready.go:38] duration metric: took 4.076786154s for node "pause-210634" to be "Ready" ...
	I1119 02:54:22.641485 1625403 api_server.go:52] waiting for apiserver process to appear ...
	I1119 02:54:22.641562 1625403 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 02:54:22.662936 1625403 api_server.go:72] duration metric: took 4.426030907s to wait for apiserver process to appear ...
	I1119 02:54:22.662956 1625403 api_server.go:88] waiting for apiserver healthz status ...
	I1119 02:54:22.662975 1625403 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1119 02:54:22.690703 1625403 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1119 02:54:22.690781 1625403 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1119 02:54:23.163964 1625403 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1119 02:54:23.172070 1625403 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1119 02:54:23.172100 1625403 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1119 02:54:23.663423 1625403 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1119 02:54:23.671790 1625403 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1119 02:54:23.672771 1625403 api_server.go:141] control plane version: v1.34.1
	I1119 02:54:23.672793 1625403 api_server.go:131] duration metric: took 1.009829153s to wait for apiserver health ...
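The healthz sequence above (HTTP 500 with pending poststarthooks, then 200 "ok") is the normal apiserver startup progression; the waiter simply re-polls /healthz until it gets 200. A minimal sketch of such a poll (hypothetical helper waitHealthz, assuming the caller supplies an *http.Client already configured to trust the cluster CA; not minikube's api_server.go) might be:

package example

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls an apiserver /healthz URL until it returns 200 OK or the
// timeout expires. Non-200 bodies list the poststarthooks still pending.
func waitHealthz(client *http.Client, url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz reported "ok"
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
}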
	I1119 02:54:23.672803 1625403 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 02:54:23.676123 1625403 system_pods.go:59] 7 kube-system pods found
	I1119 02:54:23.676161 1625403 system_pods.go:61] "coredns-66bc5c9577-p4snv" [35d307ff-e63a-486d-9eb8-95e7cf67119f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:54:23.676170 1625403 system_pods.go:61] "etcd-pause-210634" [c521dfe2-7cf4-4b2a-9b3d-91446fe702cb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1119 02:54:23.676175 1625403 system_pods.go:61] "kindnet-w68ds" [6af1936b-8342-4b94-8c66-84cea32746ff] Running
	I1119 02:54:23.676183 1625403 system_pods.go:61] "kube-apiserver-pause-210634" [46803f27-17b1-4f8f-8e3c-4af2a69d6004] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1119 02:54:23.676192 1625403 system_pods.go:61] "kube-controller-manager-pause-210634" [d4c9f203-70b9-4b92-a92b-f36b52e83543] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1119 02:54:23.676197 1625403 system_pods.go:61] "kube-proxy-r7bhh" [d06bc070-5f4f-4e5d-9268-f0bbefdd7fdb] Running
	I1119 02:54:23.676207 1625403 system_pods.go:61] "kube-scheduler-pause-210634" [3cae9114-a64c-4a8d-98c1-3fe8dc773023] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1119 02:54:23.676213 1625403 system_pods.go:74] duration metric: took 3.403645ms to wait for pod list to return data ...
	I1119 02:54:23.676222 1625403 default_sa.go:34] waiting for default service account to be created ...
	I1119 02:54:23.678867 1625403 default_sa.go:45] found service account: "default"
	I1119 02:54:23.678892 1625403 default_sa.go:55] duration metric: took 2.659838ms for default service account to be created ...
	I1119 02:54:23.678902 1625403 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 02:54:23.681758 1625403 system_pods.go:86] 7 kube-system pods found
	I1119 02:54:23.681788 1625403 system_pods.go:89] "coredns-66bc5c9577-p4snv" [35d307ff-e63a-486d-9eb8-95e7cf67119f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:54:23.681798 1625403 system_pods.go:89] "etcd-pause-210634" [c521dfe2-7cf4-4b2a-9b3d-91446fe702cb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1119 02:54:23.681805 1625403 system_pods.go:89] "kindnet-w68ds" [6af1936b-8342-4b94-8c66-84cea32746ff] Running
	I1119 02:54:23.681811 1625403 system_pods.go:89] "kube-apiserver-pause-210634" [46803f27-17b1-4f8f-8e3c-4af2a69d6004] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1119 02:54:23.681818 1625403 system_pods.go:89] "kube-controller-manager-pause-210634" [d4c9f203-70b9-4b92-a92b-f36b52e83543] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1119 02:54:23.681843 1625403 system_pods.go:89] "kube-proxy-r7bhh" [d06bc070-5f4f-4e5d-9268-f0bbefdd7fdb] Running
	I1119 02:54:23.681852 1625403 system_pods.go:89] "kube-scheduler-pause-210634" [3cae9114-a64c-4a8d-98c1-3fe8dc773023] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1119 02:54:23.681861 1625403 system_pods.go:126] duration metric: took 2.951597ms to wait for k8s-apps to be running ...
	I1119 02:54:23.681869 1625403 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 02:54:23.681929 1625403 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 02:54:23.697344 1625403 system_svc.go:56] duration metric: took 15.463374ms WaitForService to wait for kubelet
	I1119 02:54:23.697421 1625403 kubeadm.go:587] duration metric: took 5.4605215s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 02:54:23.697454 1625403 node_conditions.go:102] verifying NodePressure condition ...
	I1119 02:54:23.699981 1625403 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1119 02:54:23.700015 1625403 node_conditions.go:123] node cpu capacity is 2
	I1119 02:54:23.700028 1625403 node_conditions.go:105] duration metric: took 2.553356ms to run NodePressure ...
	I1119 02:54:23.700040 1625403 start.go:242] waiting for startup goroutines ...
	I1119 02:54:23.700076 1625403 start.go:247] waiting for cluster config update ...
	I1119 02:54:23.700090 1625403 start.go:256] writing updated cluster config ...
	I1119 02:54:23.700403 1625403 ssh_runner.go:195] Run: rm -f paused
	I1119 02:54:23.703908 1625403 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 02:54:23.704565 1625403 kapi.go:59] client config for pause-210634: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/pause-210634/client.crt", KeyFile:"/home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/pause-210634/client.key", CAFile:"/home/jenkins/minikube-integration/21924-1463525/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:
[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2127810), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1119 02:54:23.707673 1625403 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-p4snv" in "kube-system" namespace to be "Ready" or be gone ...
	W1119 02:54:25.714007 1625403 pod_ready.go:104] pod "coredns-66bc5c9577-p4snv" is not "Ready", error: <nil>
	I1119 02:54:24.107928 1608779 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 02:54:24.108400 1608779 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1119 02:54:24.108466 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 02:54:24.108550 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 02:54:24.144082 1608779 cri.go:89] found id: "b7cd8cb563c444fc3e8b2f5a65af9f99e0a16e61affc3a354a308582b4899e79"
	I1119 02:54:24.144105 1608779 cri.go:89] found id: ""
	I1119 02:54:24.144114 1608779 logs.go:282] 1 containers: [b7cd8cb563c444fc3e8b2f5a65af9f99e0a16e61affc3a354a308582b4899e79]
	I1119 02:54:24.144167 1608779 ssh_runner.go:195] Run: which crictl
	I1119 02:54:24.148124 1608779 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 02:54:24.148192 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 02:54:24.180967 1608779 cri.go:89] found id: ""
	I1119 02:54:24.181039 1608779 logs.go:282] 0 containers: []
	W1119 02:54:24.181062 1608779 logs.go:284] No container was found matching "etcd"
	I1119 02:54:24.181085 1608779 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 02:54:24.181172 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 02:54:24.211093 1608779 cri.go:89] found id: ""
	I1119 02:54:24.211166 1608779 logs.go:282] 0 containers: []
	W1119 02:54:24.211189 1608779 logs.go:284] No container was found matching "coredns"
	I1119 02:54:24.211211 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 02:54:24.211297 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 02:54:24.243910 1608779 cri.go:89] found id: "66d47217a8e90ebe2f0ae3a5a07bfee4ccb7f856fca793dec477ab23b15cc03d"
	I1119 02:54:24.243982 1608779 cri.go:89] found id: ""
	I1119 02:54:24.244004 1608779 logs.go:282] 1 containers: [66d47217a8e90ebe2f0ae3a5a07bfee4ccb7f856fca793dec477ab23b15cc03d]
	I1119 02:54:24.244093 1608779 ssh_runner.go:195] Run: which crictl
	I1119 02:54:24.248368 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 02:54:24.248492 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 02:54:24.277165 1608779 cri.go:89] found id: ""
	I1119 02:54:24.277237 1608779 logs.go:282] 0 containers: []
	W1119 02:54:24.277259 1608779 logs.go:284] No container was found matching "kube-proxy"
	I1119 02:54:24.277282 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 02:54:24.277370 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 02:54:24.309612 1608779 cri.go:89] found id: "b82017d2142c6347c55605889407441b8e982da03ec51f193858685925fb47c5"
	I1119 02:54:24.309633 1608779 cri.go:89] found id: ""
	I1119 02:54:24.309642 1608779 logs.go:282] 1 containers: [b82017d2142c6347c55605889407441b8e982da03ec51f193858685925fb47c5]
	I1119 02:54:24.309699 1608779 ssh_runner.go:195] Run: which crictl
	I1119 02:54:24.313350 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 02:54:24.313449 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 02:54:24.340551 1608779 cri.go:89] found id: ""
	I1119 02:54:24.340575 1608779 logs.go:282] 0 containers: []
	W1119 02:54:24.340584 1608779 logs.go:284] No container was found matching "kindnet"
	I1119 02:54:24.340591 1608779 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 02:54:24.340651 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 02:54:24.368490 1608779 cri.go:89] found id: ""
	I1119 02:54:24.368516 1608779 logs.go:282] 0 containers: []
	W1119 02:54:24.368525 1608779 logs.go:284] No container was found matching "storage-provisioner"
	I1119 02:54:24.368533 1608779 logs.go:123] Gathering logs for kube-controller-manager [b82017d2142c6347c55605889407441b8e982da03ec51f193858685925fb47c5] ...
	I1119 02:54:24.368545 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b82017d2142c6347c55605889407441b8e982da03ec51f193858685925fb47c5"
	I1119 02:54:24.403191 1608779 logs.go:123] Gathering logs for CRI-O ...
	I1119 02:54:24.403218 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 02:54:24.467457 1608779 logs.go:123] Gathering logs for container status ...
	I1119 02:54:24.467493 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 02:54:24.501649 1608779 logs.go:123] Gathering logs for kubelet ...
	I1119 02:54:24.501727 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 02:54:24.624310 1608779 logs.go:123] Gathering logs for dmesg ...
	I1119 02:54:24.624409 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 02:54:24.654517 1608779 logs.go:123] Gathering logs for describe nodes ...
	I1119 02:54:24.654543 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 02:54:24.761016 1608779 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 02:54:24.761086 1608779 logs.go:123] Gathering logs for kube-apiserver [b7cd8cb563c444fc3e8b2f5a65af9f99e0a16e61affc3a354a308582b4899e79] ...
	I1119 02:54:24.761114 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b7cd8cb563c444fc3e8b2f5a65af9f99e0a16e61affc3a354a308582b4899e79"
	I1119 02:54:24.803855 1608779 logs.go:123] Gathering logs for kube-scheduler [66d47217a8e90ebe2f0ae3a5a07bfee4ccb7f856fca793dec477ab23b15cc03d] ...
	I1119 02:54:24.803886 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 66d47217a8e90ebe2f0ae3a5a07bfee4ccb7f856fca793dec477ab23b15cc03d"
	I1119 02:54:27.395020 1608779 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 02:54:27.395391 1608779 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1119 02:54:27.395430 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 02:54:27.395482 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 02:54:27.427782 1608779 cri.go:89] found id: "b7cd8cb563c444fc3e8b2f5a65af9f99e0a16e61affc3a354a308582b4899e79"
	I1119 02:54:27.427805 1608779 cri.go:89] found id: ""
	I1119 02:54:27.427814 1608779 logs.go:282] 1 containers: [b7cd8cb563c444fc3e8b2f5a65af9f99e0a16e61affc3a354a308582b4899e79]
	I1119 02:54:27.427872 1608779 ssh_runner.go:195] Run: which crictl
	I1119 02:54:27.431761 1608779 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 02:54:27.431831 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 02:54:27.461796 1608779 cri.go:89] found id: ""
	I1119 02:54:27.461819 1608779 logs.go:282] 0 containers: []
	W1119 02:54:27.461827 1608779 logs.go:284] No container was found matching "etcd"
	I1119 02:54:27.461834 1608779 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 02:54:27.461894 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 02:54:27.488706 1608779 cri.go:89] found id: ""
	I1119 02:54:27.488730 1608779 logs.go:282] 0 containers: []
	W1119 02:54:27.488739 1608779 logs.go:284] No container was found matching "coredns"
	I1119 02:54:27.488746 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 02:54:27.488809 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 02:54:27.519578 1608779 cri.go:89] found id: "66d47217a8e90ebe2f0ae3a5a07bfee4ccb7f856fca793dec477ab23b15cc03d"
	I1119 02:54:27.519599 1608779 cri.go:89] found id: ""
	I1119 02:54:27.519607 1608779 logs.go:282] 1 containers: [66d47217a8e90ebe2f0ae3a5a07bfee4ccb7f856fca793dec477ab23b15cc03d]
	I1119 02:54:27.519662 1608779 ssh_runner.go:195] Run: which crictl
	I1119 02:54:27.523500 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 02:54:27.523575 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 02:54:27.550324 1608779 cri.go:89] found id: ""
	I1119 02:54:27.550348 1608779 logs.go:282] 0 containers: []
	W1119 02:54:27.550357 1608779 logs.go:284] No container was found matching "kube-proxy"
	I1119 02:54:27.550363 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 02:54:27.550434 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 02:54:27.576740 1608779 cri.go:89] found id: "b82017d2142c6347c55605889407441b8e982da03ec51f193858685925fb47c5"
	I1119 02:54:27.576761 1608779 cri.go:89] found id: ""
	I1119 02:54:27.576769 1608779 logs.go:282] 1 containers: [b82017d2142c6347c55605889407441b8e982da03ec51f193858685925fb47c5]
	I1119 02:54:27.576825 1608779 ssh_runner.go:195] Run: which crictl
	I1119 02:54:27.580494 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 02:54:27.580568 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 02:54:27.607017 1608779 cri.go:89] found id: ""
	I1119 02:54:27.607046 1608779 logs.go:282] 0 containers: []
	W1119 02:54:27.607054 1608779 logs.go:284] No container was found matching "kindnet"
	I1119 02:54:27.607061 1608779 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 02:54:27.607119 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 02:54:27.633257 1608779 cri.go:89] found id: ""
	I1119 02:54:27.633279 1608779 logs.go:282] 0 containers: []
	W1119 02:54:27.633288 1608779 logs.go:284] No container was found matching "storage-provisioner"
	I1119 02:54:27.633297 1608779 logs.go:123] Gathering logs for kube-apiserver [b7cd8cb563c444fc3e8b2f5a65af9f99e0a16e61affc3a354a308582b4899e79] ...
	I1119 02:54:27.633309 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b7cd8cb563c444fc3e8b2f5a65af9f99e0a16e61affc3a354a308582b4899e79"
	I1119 02:54:27.665454 1608779 logs.go:123] Gathering logs for kube-scheduler [66d47217a8e90ebe2f0ae3a5a07bfee4ccb7f856fca793dec477ab23b15cc03d] ...
	I1119 02:54:27.665486 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 66d47217a8e90ebe2f0ae3a5a07bfee4ccb7f856fca793dec477ab23b15cc03d"
	I1119 02:54:27.727524 1608779 logs.go:123] Gathering logs for kube-controller-manager [b82017d2142c6347c55605889407441b8e982da03ec51f193858685925fb47c5] ...
	I1119 02:54:27.727562 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b82017d2142c6347c55605889407441b8e982da03ec51f193858685925fb47c5"
	I1119 02:54:27.755805 1608779 logs.go:123] Gathering logs for CRI-O ...
	I1119 02:54:27.755833 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 02:54:27.817749 1608779 logs.go:123] Gathering logs for container status ...
	I1119 02:54:27.817787 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 02:54:27.847474 1608779 logs.go:123] Gathering logs for kubelet ...
	I1119 02:54:27.847499 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 02:54:27.971954 1608779 logs.go:123] Gathering logs for dmesg ...
	I1119 02:54:27.971993 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 02:54:27.988612 1608779 logs.go:123] Gathering logs for describe nodes ...
	I1119 02:54:27.988641 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 02:54:28.067650 1608779 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1119 02:54:28.213471 1625403 pod_ready.go:104] pod "coredns-66bc5c9577-p4snv" is not "Ready", error: <nil>
	I1119 02:54:28.713244 1625403 pod_ready.go:94] pod "coredns-66bc5c9577-p4snv" is "Ready"
	I1119 02:54:28.713276 1625403 pod_ready.go:86] duration metric: took 5.005575219s for pod "coredns-66bc5c9577-p4snv" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:54:28.715594 1625403 pod_ready.go:83] waiting for pod "etcd-pause-210634" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:54:28.719625 1625403 pod_ready.go:94] pod "etcd-pause-210634" is "Ready"
	I1119 02:54:28.719651 1625403 pod_ready.go:86] duration metric: took 4.030244ms for pod "etcd-pause-210634" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:54:28.721840 1625403 pod_ready.go:83] waiting for pod "kube-apiserver-pause-210634" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:54:29.227893 1625403 pod_ready.go:94] pod "kube-apiserver-pause-210634" is "Ready"
	I1119 02:54:29.227922 1625403 pod_ready.go:86] duration metric: took 506.062135ms for pod "kube-apiserver-pause-210634" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:54:29.230199 1625403 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-210634" in "kube-system" namespace to be "Ready" or be gone ...
	W1119 02:54:31.236761 1625403 pod_ready.go:104] pod "kube-controller-manager-pause-210634" is not "Ready", error: <nil>
	I1119 02:54:30.567815 1608779 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 02:54:30.568261 1608779 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1119 02:54:30.568307 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 02:54:30.568361 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 02:54:30.597872 1608779 cri.go:89] found id: "b7cd8cb563c444fc3e8b2f5a65af9f99e0a16e61affc3a354a308582b4899e79"
	I1119 02:54:30.597894 1608779 cri.go:89] found id: ""
	I1119 02:54:30.597902 1608779 logs.go:282] 1 containers: [b7cd8cb563c444fc3e8b2f5a65af9f99e0a16e61affc3a354a308582b4899e79]
	I1119 02:54:30.597961 1608779 ssh_runner.go:195] Run: which crictl
	I1119 02:54:30.601540 1608779 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 02:54:30.601613 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 02:54:30.628537 1608779 cri.go:89] found id: ""
	I1119 02:54:30.628560 1608779 logs.go:282] 0 containers: []
	W1119 02:54:30.628569 1608779 logs.go:284] No container was found matching "etcd"
	I1119 02:54:30.628575 1608779 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 02:54:30.628682 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 02:54:30.654029 1608779 cri.go:89] found id: ""
	I1119 02:54:30.654058 1608779 logs.go:282] 0 containers: []
	W1119 02:54:30.654068 1608779 logs.go:284] No container was found matching "coredns"
	I1119 02:54:30.654074 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 02:54:30.654153 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 02:54:30.686790 1608779 cri.go:89] found id: "66d47217a8e90ebe2f0ae3a5a07bfee4ccb7f856fca793dec477ab23b15cc03d"
	I1119 02:54:30.686815 1608779 cri.go:89] found id: ""
	I1119 02:54:30.686823 1608779 logs.go:282] 1 containers: [66d47217a8e90ebe2f0ae3a5a07bfee4ccb7f856fca793dec477ab23b15cc03d]
	I1119 02:54:30.686879 1608779 ssh_runner.go:195] Run: which crictl
	I1119 02:54:30.690711 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 02:54:30.690789 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 02:54:30.718829 1608779 cri.go:89] found id: ""
	I1119 02:54:30.718858 1608779 logs.go:282] 0 containers: []
	W1119 02:54:30.718866 1608779 logs.go:284] No container was found matching "kube-proxy"
	I1119 02:54:30.718872 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 02:54:30.718947 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 02:54:30.753424 1608779 cri.go:89] found id: "b82017d2142c6347c55605889407441b8e982da03ec51f193858685925fb47c5"
	I1119 02:54:30.753447 1608779 cri.go:89] found id: ""
	I1119 02:54:30.753456 1608779 logs.go:282] 1 containers: [b82017d2142c6347c55605889407441b8e982da03ec51f193858685925fb47c5]
	I1119 02:54:30.753533 1608779 ssh_runner.go:195] Run: which crictl
	I1119 02:54:30.757101 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 02:54:30.757169 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 02:54:30.787760 1608779 cri.go:89] found id: ""
	I1119 02:54:30.787786 1608779 logs.go:282] 0 containers: []
	W1119 02:54:30.787795 1608779 logs.go:284] No container was found matching "kindnet"
	I1119 02:54:30.787802 1608779 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 02:54:30.787862 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 02:54:30.815283 1608779 cri.go:89] found id: ""
	I1119 02:54:30.815305 1608779 logs.go:282] 0 containers: []
	W1119 02:54:30.815314 1608779 logs.go:284] No container was found matching "storage-provisioner"
	I1119 02:54:30.815323 1608779 logs.go:123] Gathering logs for kube-scheduler [66d47217a8e90ebe2f0ae3a5a07bfee4ccb7f856fca793dec477ab23b15cc03d] ...
	I1119 02:54:30.815335 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 66d47217a8e90ebe2f0ae3a5a07bfee4ccb7f856fca793dec477ab23b15cc03d"
	I1119 02:54:30.881640 1608779 logs.go:123] Gathering logs for kube-controller-manager [b82017d2142c6347c55605889407441b8e982da03ec51f193858685925fb47c5] ...
	I1119 02:54:30.881675 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b82017d2142c6347c55605889407441b8e982da03ec51f193858685925fb47c5"
	I1119 02:54:30.912471 1608779 logs.go:123] Gathering logs for CRI-O ...
	I1119 02:54:30.912496 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 02:54:30.974504 1608779 logs.go:123] Gathering logs for container status ...
	I1119 02:54:30.974548 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 02:54:31.014243 1608779 logs.go:123] Gathering logs for kubelet ...
	I1119 02:54:31.014272 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 02:54:31.131704 1608779 logs.go:123] Gathering logs for dmesg ...
	I1119 02:54:31.131740 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 02:54:31.150817 1608779 logs.go:123] Gathering logs for describe nodes ...
	I1119 02:54:31.150846 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 02:54:31.215502 1608779 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 02:54:31.215520 1608779 logs.go:123] Gathering logs for kube-apiserver [b7cd8cb563c444fc3e8b2f5a65af9f99e0a16e61affc3a354a308582b4899e79] ...
	I1119 02:54:31.215533 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b7cd8cb563c444fc3e8b2f5a65af9f99e0a16e61affc3a354a308582b4899e79"
	I1119 02:54:32.735553 1625403 pod_ready.go:94] pod "kube-controller-manager-pause-210634" is "Ready"
	I1119 02:54:32.735581 1625403 pod_ready.go:86] duration metric: took 3.505358631s for pod "kube-controller-manager-pause-210634" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:54:32.737644 1625403 pod_ready.go:83] waiting for pod "kube-proxy-r7bhh" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:54:32.741444 1625403 pod_ready.go:94] pod "kube-proxy-r7bhh" is "Ready"
	I1119 02:54:32.741469 1625403 pod_ready.go:86] duration metric: took 3.804101ms for pod "kube-proxy-r7bhh" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:54:32.911515 1625403 pod_ready.go:83] waiting for pod "kube-scheduler-pause-210634" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:54:33.715489 1625403 pod_ready.go:94] pod "kube-scheduler-pause-210634" is "Ready"
	I1119 02:54:33.715515 1625403 pod_ready.go:86] duration metric: took 803.971383ms for pod "kube-scheduler-pause-210634" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:54:33.715528 1625403 pod_ready.go:40] duration metric: took 10.011589172s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 02:54:33.771353 1625403 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1119 02:54:33.774597 1625403 out.go:179] * Done! kubectl is now configured to use "pause-210634" cluster and "default" namespace by default
	I1119 02:54:33.756847 1608779 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 02:54:33.757220 1608779 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1119 02:54:33.757270 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 02:54:33.757340 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 02:54:33.792307 1608779 cri.go:89] found id: "b7cd8cb563c444fc3e8b2f5a65af9f99e0a16e61affc3a354a308582b4899e79"
	I1119 02:54:33.792329 1608779 cri.go:89] found id: ""
	I1119 02:54:33.792338 1608779 logs.go:282] 1 containers: [b7cd8cb563c444fc3e8b2f5a65af9f99e0a16e61affc3a354a308582b4899e79]
	I1119 02:54:33.792390 1608779 ssh_runner.go:195] Run: which crictl
	I1119 02:54:33.797547 1608779 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 02:54:33.797614 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 02:54:33.865502 1608779 cri.go:89] found id: ""
	I1119 02:54:33.865541 1608779 logs.go:282] 0 containers: []
	W1119 02:54:33.865550 1608779 logs.go:284] No container was found matching "etcd"
	I1119 02:54:33.865556 1608779 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 02:54:33.865615 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 02:54:33.904661 1608779 cri.go:89] found id: ""
	I1119 02:54:33.904685 1608779 logs.go:282] 0 containers: []
	W1119 02:54:33.904693 1608779 logs.go:284] No container was found matching "coredns"
	I1119 02:54:33.904700 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 02:54:33.904752 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 02:54:33.940864 1608779 cri.go:89] found id: "66d47217a8e90ebe2f0ae3a5a07bfee4ccb7f856fca793dec477ab23b15cc03d"
	I1119 02:54:33.940888 1608779 cri.go:89] found id: ""
	I1119 02:54:33.940897 1608779 logs.go:282] 1 containers: [66d47217a8e90ebe2f0ae3a5a07bfee4ccb7f856fca793dec477ab23b15cc03d]
	I1119 02:54:33.940955 1608779 ssh_runner.go:195] Run: which crictl
	I1119 02:54:33.945155 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 02:54:33.945231 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 02:54:33.993324 1608779 cri.go:89] found id: ""
	I1119 02:54:33.993345 1608779 logs.go:282] 0 containers: []
	W1119 02:54:33.993354 1608779 logs.go:284] No container was found matching "kube-proxy"
	I1119 02:54:33.993361 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 02:54:33.993418 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 02:54:34.038576 1608779 cri.go:89] found id: "b82017d2142c6347c55605889407441b8e982da03ec51f193858685925fb47c5"
	I1119 02:54:34.038595 1608779 cri.go:89] found id: ""
	I1119 02:54:34.038603 1608779 logs.go:282] 1 containers: [b82017d2142c6347c55605889407441b8e982da03ec51f193858685925fb47c5]
	I1119 02:54:34.038660 1608779 ssh_runner.go:195] Run: which crictl
	I1119 02:54:34.045463 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 02:54:34.045547 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 02:54:34.080096 1608779 cri.go:89] found id: ""
	I1119 02:54:34.080118 1608779 logs.go:282] 0 containers: []
	W1119 02:54:34.080127 1608779 logs.go:284] No container was found matching "kindnet"
	I1119 02:54:34.080148 1608779 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 02:54:34.080207 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 02:54:34.111941 1608779 cri.go:89] found id: ""
	I1119 02:54:34.111963 1608779 logs.go:282] 0 containers: []
	W1119 02:54:34.111972 1608779 logs.go:284] No container was found matching "storage-provisioner"
	I1119 02:54:34.111980 1608779 logs.go:123] Gathering logs for kube-apiserver [b7cd8cb563c444fc3e8b2f5a65af9f99e0a16e61affc3a354a308582b4899e79] ...
	I1119 02:54:34.111995 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b7cd8cb563c444fc3e8b2f5a65af9f99e0a16e61affc3a354a308582b4899e79"
	I1119 02:54:34.158850 1608779 logs.go:123] Gathering logs for kube-scheduler [66d47217a8e90ebe2f0ae3a5a07bfee4ccb7f856fca793dec477ab23b15cc03d] ...
	I1119 02:54:34.158927 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 66d47217a8e90ebe2f0ae3a5a07bfee4ccb7f856fca793dec477ab23b15cc03d"
	I1119 02:54:34.245619 1608779 logs.go:123] Gathering logs for kube-controller-manager [b82017d2142c6347c55605889407441b8e982da03ec51f193858685925fb47c5] ...
	I1119 02:54:34.245698 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b82017d2142c6347c55605889407441b8e982da03ec51f193858685925fb47c5"
	I1119 02:54:34.275260 1608779 logs.go:123] Gathering logs for CRI-O ...
	I1119 02:54:34.275330 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 02:54:34.348373 1608779 logs.go:123] Gathering logs for container status ...
	I1119 02:54:34.348460 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 02:54:34.396960 1608779 logs.go:123] Gathering logs for kubelet ...
	I1119 02:54:34.397058 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 02:54:34.534672 1608779 logs.go:123] Gathering logs for dmesg ...
	I1119 02:54:34.534759 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 02:54:34.556056 1608779 logs.go:123] Gathering logs for describe nodes ...
	I1119 02:54:34.556085 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 02:54:34.630895 1608779 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 02:54:37.132434 1608779 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 02:54:37.132847 1608779 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1119 02:54:37.132891 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 02:54:37.132946 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 02:54:37.163217 1608779 cri.go:89] found id: "b7cd8cb563c444fc3e8b2f5a65af9f99e0a16e61affc3a354a308582b4899e79"
	I1119 02:54:37.163237 1608779 cri.go:89] found id: ""
	I1119 02:54:37.163246 1608779 logs.go:282] 1 containers: [b7cd8cb563c444fc3e8b2f5a65af9f99e0a16e61affc3a354a308582b4899e79]
	I1119 02:54:37.163302 1608779 ssh_runner.go:195] Run: which crictl
	I1119 02:54:37.169042 1608779 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 02:54:37.169110 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 02:54:37.208733 1608779 cri.go:89] found id: ""
	I1119 02:54:37.208758 1608779 logs.go:282] 0 containers: []
	W1119 02:54:37.208767 1608779 logs.go:284] No container was found matching "etcd"
	I1119 02:54:37.208774 1608779 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 02:54:37.208850 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 02:54:37.240828 1608779 cri.go:89] found id: ""
	I1119 02:54:37.240855 1608779 logs.go:282] 0 containers: []
	W1119 02:54:37.240864 1608779 logs.go:284] No container was found matching "coredns"
	I1119 02:54:37.240871 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 02:54:37.240934 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 02:54:37.275219 1608779 cri.go:89] found id: "66d47217a8e90ebe2f0ae3a5a07bfee4ccb7f856fca793dec477ab23b15cc03d"
	I1119 02:54:37.275250 1608779 cri.go:89] found id: ""
	I1119 02:54:37.275258 1608779 logs.go:282] 1 containers: [66d47217a8e90ebe2f0ae3a5a07bfee4ccb7f856fca793dec477ab23b15cc03d]
	I1119 02:54:37.275313 1608779 ssh_runner.go:195] Run: which crictl
	I1119 02:54:37.279336 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 02:54:37.279409 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 02:54:37.315411 1608779 cri.go:89] found id: ""
	I1119 02:54:37.315438 1608779 logs.go:282] 0 containers: []
	W1119 02:54:37.315448 1608779 logs.go:284] No container was found matching "kube-proxy"
	I1119 02:54:37.315455 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 02:54:37.315514 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 02:54:37.356039 1608779 cri.go:89] found id: "b82017d2142c6347c55605889407441b8e982da03ec51f193858685925fb47c5"
	I1119 02:54:37.356062 1608779 cri.go:89] found id: ""
	I1119 02:54:37.356071 1608779 logs.go:282] 1 containers: [b82017d2142c6347c55605889407441b8e982da03ec51f193858685925fb47c5]
	I1119 02:54:37.356129 1608779 ssh_runner.go:195] Run: which crictl
	I1119 02:54:37.360013 1608779 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 02:54:37.360084 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 02:54:37.391653 1608779 cri.go:89] found id: ""
	I1119 02:54:37.391676 1608779 logs.go:282] 0 containers: []
	W1119 02:54:37.391685 1608779 logs.go:284] No container was found matching "kindnet"
	I1119 02:54:37.391692 1608779 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 02:54:37.391748 1608779 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 02:54:37.427471 1608779 cri.go:89] found id: ""
	I1119 02:54:37.427495 1608779 logs.go:282] 0 containers: []
	W1119 02:54:37.427505 1608779 logs.go:284] No container was found matching "storage-provisioner"
	I1119 02:54:37.427513 1608779 logs.go:123] Gathering logs for CRI-O ...
	I1119 02:54:37.427522 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 02:54:37.504329 1608779 logs.go:123] Gathering logs for container status ...
	I1119 02:54:37.504366 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 02:54:37.546152 1608779 logs.go:123] Gathering logs for kubelet ...
	I1119 02:54:37.546183 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 02:54:37.693594 1608779 logs.go:123] Gathering logs for dmesg ...
	I1119 02:54:37.693633 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 02:54:37.718108 1608779 logs.go:123] Gathering logs for describe nodes ...
	I1119 02:54:37.718137 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 02:54:37.843087 1608779 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 02:54:37.843109 1608779 logs.go:123] Gathering logs for kube-apiserver [b7cd8cb563c444fc3e8b2f5a65af9f99e0a16e61affc3a354a308582b4899e79] ...
	I1119 02:54:37.843124 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b7cd8cb563c444fc3e8b2f5a65af9f99e0a16e61affc3a354a308582b4899e79"
	I1119 02:54:37.892544 1608779 logs.go:123] Gathering logs for kube-scheduler [66d47217a8e90ebe2f0ae3a5a07bfee4ccb7f856fca793dec477ab23b15cc03d] ...
	I1119 02:54:37.892578 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 66d47217a8e90ebe2f0ae3a5a07bfee4ccb7f856fca793dec477ab23b15cc03d"
	I1119 02:54:37.973066 1608779 logs.go:123] Gathering logs for kube-controller-manager [b82017d2142c6347c55605889407441b8e982da03ec51f193858685925fb47c5] ...
	I1119 02:54:37.973100 1608779 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b82017d2142c6347c55605889407441b8e982da03ec51f193858685925fb47c5"
	
	
	==> CRI-O <==
	Nov 19 02:54:16 pause-210634 crio[2075]: time="2025-11-19T02:54:16.766665723Z" level=info msg="Started container" PID=2212 containerID=92c3ad91064be1f0a314b7990a3febf510451be94e08960b34dcdff3fadc057b description=kube-system/kindnet-w68ds/kindnet-cni id=6387e877-3242-4635-aef7-6620a1c8c3ee name=/runtime.v1.RuntimeService/StartContainer sandboxID=16a6f497be353e42e0a9e3822edab92b277cae5ee3de753f7104bbc9b5d04a25
	Nov 19 02:54:16 pause-210634 crio[2075]: time="2025-11-19T02:54:16.772034646Z" level=info msg="Started container" PID=2213 containerID=e150eb077157007e590ae1733965580b7175324548f25a32203bab129b2bd815 description=kube-system/kube-proxy-r7bhh/kube-proxy id=93aae6c5-8138-4b82-b190-76fdc78b0d42 name=/runtime.v1.RuntimeService/StartContainer sandboxID=77e448635c6a0676d11bffb2e4595fcbf2668d1f5d96636f3e311e4aea44929e
	Nov 19 02:54:16 pause-210634 crio[2075]: time="2025-11-19T02:54:16.778917791Z" level=info msg="Started container" PID=2232 containerID=5b278aa8fa67a48484345b54805bb5c29e9162a5f54af5ee074fef8c5766b072 description=kube-system/kube-scheduler-pause-210634/kube-scheduler id=611ae272-8797-4249-890e-afa9735011e8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=783c70939f2a35787b02debaceb8d0f23df00b1f31624f27110ee5917dd6dc85
	Nov 19 02:54:16 pause-210634 crio[2075]: time="2025-11-19T02:54:16.812958058Z" level=info msg="Created container b48490446c0478d4b524dee6f413a7df871f5a694e492a481e6b826633cf96b5: kube-system/coredns-66bc5c9577-p4snv/coredns" id=2044a8ff-98b3-4977-877d-e9a568fb5333 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 02:54:16 pause-210634 crio[2075]: time="2025-11-19T02:54:16.818696456Z" level=info msg="Started container" PID=2225 containerID=003c29925c9f74027c0217cc5ade71e94414e340d291dd591f874bc578fbea1e description=kube-system/kube-controller-manager-pause-210634/kube-controller-manager id=88c0b744-4886-4d8f-90c0-0fb31d7b16bc name=/runtime.v1.RuntimeService/StartContainer sandboxID=1b99f0db290c54c8134fe204bc86fa20ce1f5440f1d8d92051b9aa036d497ff3
	Nov 19 02:54:16 pause-210634 crio[2075]: time="2025-11-19T02:54:16.839247029Z" level=info msg="Starting container: b48490446c0478d4b524dee6f413a7df871f5a694e492a481e6b826633cf96b5" id=9789803c-2f2b-4cee-8e30-1dbde1ce7ca9 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 02:54:16 pause-210634 crio[2075]: time="2025-11-19T02:54:16.850113003Z" level=info msg="Started container" PID=2248 containerID=b48490446c0478d4b524dee6f413a7df871f5a694e492a481e6b826633cf96b5 description=kube-system/coredns-66bc5c9577-p4snv/coredns id=9789803c-2f2b-4cee-8e30-1dbde1ce7ca9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=708923b24f536ea58db271fa3cbd6db0c549da26783bb4063eae647c13d139d3
	Nov 19 02:54:16 pause-210634 crio[2075]: time="2025-11-19T02:54:16.85676486Z" level=info msg="Created container eae9c2747a6c090141bed859a946caeda9db2b858e668856f491efbcc50cc1f0: kube-system/kube-apiserver-pause-210634/kube-apiserver" id=76dc1f4d-0d24-4884-8d2f-651382d51a88 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 02:54:16 pause-210634 crio[2075]: time="2025-11-19T02:54:16.860434803Z" level=info msg="Starting container: eae9c2747a6c090141bed859a946caeda9db2b858e668856f491efbcc50cc1f0" id=6777420d-7a11-42b7-908f-b7dfe9a2d366 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 02:54:16 pause-210634 crio[2075]: time="2025-11-19T02:54:16.862351754Z" level=info msg="Started container" PID=2249 containerID=eae9c2747a6c090141bed859a946caeda9db2b858e668856f491efbcc50cc1f0 description=kube-system/kube-apiserver-pause-210634/kube-apiserver id=6777420d-7a11-42b7-908f-b7dfe9a2d366 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e654c8c2052deacd645dbf93335aa0a3412e8cfc857e0999d788742442c77f42
	Nov 19 02:54:27 pause-210634 crio[2075]: time="2025-11-19T02:54:27.096065111Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 02:54:27 pause-210634 crio[2075]: time="2025-11-19T02:54:27.099402042Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 02:54:27 pause-210634 crio[2075]: time="2025-11-19T02:54:27.099437225Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 02:54:27 pause-210634 crio[2075]: time="2025-11-19T02:54:27.099459378Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 02:54:27 pause-210634 crio[2075]: time="2025-11-19T02:54:27.102365273Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 02:54:27 pause-210634 crio[2075]: time="2025-11-19T02:54:27.102403221Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 02:54:27 pause-210634 crio[2075]: time="2025-11-19T02:54:27.102426252Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 02:54:27 pause-210634 crio[2075]: time="2025-11-19T02:54:27.105487367Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 02:54:27 pause-210634 crio[2075]: time="2025-11-19T02:54:27.105654263Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 02:54:27 pause-210634 crio[2075]: time="2025-11-19T02:54:27.105689421Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 02:54:27 pause-210634 crio[2075]: time="2025-11-19T02:54:27.108418419Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 02:54:27 pause-210634 crio[2075]: time="2025-11-19T02:54:27.108446643Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 02:54:27 pause-210634 crio[2075]: time="2025-11-19T02:54:27.108469445Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 02:54:27 pause-210634 crio[2075]: time="2025-11-19T02:54:27.111332363Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 02:54:27 pause-210634 crio[2075]: time="2025-11-19T02:54:27.111393186Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	eae9c2747a6c0       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   22 seconds ago       Running             kube-apiserver            1                   e654c8c2052de       kube-apiserver-pause-210634            kube-system
	5b278aa8fa67a       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   22 seconds ago       Running             kube-scheduler            1                   783c70939f2a3       kube-scheduler-pause-210634            kube-system
	b48490446c047       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   22 seconds ago       Running             coredns                   1                   708923b24f536       coredns-66bc5c9577-p4snv               kube-system
	003c29925c9f7       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   22 seconds ago       Running             kube-controller-manager   1                   1b99f0db290c5       kube-controller-manager-pause-210634   kube-system
	e150eb0771570       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   22 seconds ago       Running             kube-proxy                1                   77e448635c6a0       kube-proxy-r7bhh                       kube-system
	92c3ad91064be       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   22 seconds ago       Running             kindnet-cni               1                   16a6f497be353       kindnet-w68ds                          kube-system
	eb8cf828ba50c       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   22 seconds ago       Running             etcd                      1                   228c0822f24bb       etcd-pause-210634                      kube-system
	1ca5597ffbe5c       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   34 seconds ago       Exited              coredns                   0                   708923b24f536       coredns-66bc5c9577-p4snv               kube-system
	b89fe3c5e3979       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Exited              kube-proxy                0                   77e448635c6a0       kube-proxy-r7bhh                       kube-system
	1cb2a0f2c8744       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   16a6f497be353       kindnet-w68ds                          kube-system
	6e6bccbb7a956       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   0                   1b99f0db290c5       kube-controller-manager-pause-210634   kube-system
	e3f1e86ddd1d3       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Exited              kube-scheduler            0                   783c70939f2a3       kube-scheduler-pause-210634            kube-system
	da21118c4e7ff       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Exited              etcd                      0                   228c0822f24bb       etcd-pause-210634                      kube-system
	99c5fdf54f079       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            0                   e654c8c2052de       kube-apiserver-pause-210634            kube-system
	
	
	==> coredns [1ca5597ffbe5c95f3994042247107836a869c47234e06acdc0ead2bc3dded4ac] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38751 - 18190 "HINFO IN 2228662234522525700.1617383602449744944. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.003946604s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [b48490446c0478d4b524dee6f413a7df871f5a694e492a481e6b826633cf96b5] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40405 - 35084 "HINFO IN 2104345973068509918.4265233509804500088. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.003799409s
	
	
	==> describe nodes <==
	Name:               pause-210634
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-210634
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ebd49efe8b7d44f89fc96e21beffcd64dfee8277
	                    minikube.k8s.io/name=pause-210634
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T02_53_19_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 02:53:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-210634
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 02:54:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 02:54:20 +0000   Wed, 19 Nov 2025 02:53:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 02:54:20 +0000   Wed, 19 Nov 2025 02:53:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 02:54:20 +0000   Wed, 19 Nov 2025 02:53:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 02:54:20 +0000   Wed, 19 Nov 2025 02:54:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-210634
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                282827d6-0439-4be2-8cf8-d4d9944eb954
	  Boot ID:                    b92b1939-fcd0-45dc-ac89-2d161566a71c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-p4snv                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     76s
	  kube-system                 etcd-pause-210634                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         81s
	  kube-system                 kindnet-w68ds                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      76s
	  kube-system                 kube-apiserver-pause-210634             250m (12%)    0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 kube-controller-manager-pause-210634    200m (10%)    0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 kube-proxy-r7bhh                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 kube-scheduler-pause-210634             100m (5%)     0 (0%)      0 (0%)           0 (0%)         83s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 74s                kube-proxy       
	  Normal   Starting                 16s                kube-proxy       
	  Normal   NodeHasSufficientPID     89s (x8 over 89s)  kubelet          Node pause-210634 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 89s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  89s (x8 over 89s)  kubelet          Node pause-210634 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    89s (x8 over 89s)  kubelet          Node pause-210634 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 89s                kubelet          Starting kubelet.
	  Normal   Starting                 81s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 81s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  81s                kubelet          Node pause-210634 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    81s                kubelet          Node pause-210634 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     81s                kubelet          Node pause-210634 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           77s                node-controller  Node pause-210634 event: Registered Node pause-210634 in Controller
	  Normal   NodeReady                35s                kubelet          Node pause-210634 status is now: NodeReady
	  Normal   RegisteredNode           15s                node-controller  Node pause-210634 event: Registered Node pause-210634 in Controller
	
	
	==> dmesg <==
	[Nov19 02:25] overlayfs: idmapped layers are currently not supported
	[ +42.421073] overlayfs: idmapped layers are currently not supported
	[Nov19 02:27] overlayfs: idmapped layers are currently not supported
	[  +3.136079] overlayfs: idmapped layers are currently not supported
	[ +45.971049] overlayfs: idmapped layers are currently not supported
	[Nov19 02:28] overlayfs: idmapped layers are currently not supported
	[Nov19 02:30] overlayfs: idmapped layers are currently not supported
	[Nov19 02:35] overlayfs: idmapped layers are currently not supported
	[ +37.747558] overlayfs: idmapped layers are currently not supported
	[Nov19 02:37] overlayfs: idmapped layers are currently not supported
	[Nov19 02:38] overlayfs: idmapped layers are currently not supported
	[Nov19 02:39] overlayfs: idmapped layers are currently not supported
	[Nov19 02:41] overlayfs: idmapped layers are currently not supported
	[ +25.528121] overlayfs: idmapped layers are currently not supported
	[ +11.329962] overlayfs: idmapped layers are currently not supported
	[Nov19 02:42] overlayfs: idmapped layers are currently not supported
	[ +16.386117] overlayfs: idmapped layers are currently not supported
	[Nov19 02:43] overlayfs: idmapped layers are currently not supported
	[ +23.762081] overlayfs: idmapped layers are currently not supported
	[Nov19 02:45] overlayfs: idmapped layers are currently not supported
	[Nov19 02:46] overlayfs: idmapped layers are currently not supported
	[Nov19 02:48] overlayfs: idmapped layers are currently not supported
	[Nov19 02:50] overlayfs: idmapped layers are currently not supported
	[ +30.622614] overlayfs: idmapped layers are currently not supported
	[Nov19 02:53] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [da21118c4e7ffd2ca35cc7a4a6cbace43bc77174d343ccb4fbf9ea2f65d04d5e] <==
	{"level":"warn","ts":"2025-11-19T02:53:14.461037Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:53:14.476699Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:53:14.490701Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:53:14.517224Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:53:14.534171Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:53:14.546929Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:53:14.611057Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41792","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-19T02:54:09.301606Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-19T02:54:09.301685Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-210634","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-11-19T02:54:09.301824Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-19T02:54:09.580335Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-19T02:54:09.580408Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-19T02:54:09.580430Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"info","ts":"2025-11-19T02:54:09.580484Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-11-19T02:54:09.580546Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-19T02:54:09.580640Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-19T02:54:09.580683Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-19T02:54:09.580642Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-19T02:54:09.580707Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-19T02:54:09.580716Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-19T02:54:09.580590Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-11-19T02:54:09.583937Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-11-19T02:54:09.584019Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-19T02:54:09.584052Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-19T02:54:09.584061Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-210634","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> etcd [eb8cf828ba50c64f1cda8c35f26f210691c1cb238dd26ab0d751895dff0facef] <==
	{"level":"warn","ts":"2025-11-19T02:54:20.788336Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:54:20.873709Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:54:20.926607Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:54:20.966380Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:54:21.027213Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:54:21.101290Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:54:21.150436Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:54:21.184847Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:54:21.199365Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:54:21.220493Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:54:21.267073Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:54:21.309906Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:54:21.422379Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:54:21.440464Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:54:21.458828Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:54:21.488246Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:54:21.519781Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:54:21.552841Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:54:21.584395Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:54:21.611606Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:54:21.628411Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:54:21.668849Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:54:21.682219Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:54:21.698072Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T02:54:21.781181Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33634","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 02:54:39 up 10:36,  0 user,  load average: 2.57, 2.57, 2.14
	Linux pause-210634 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1cb2a0f2c8744125697bf96d494f11f98ffa7f0812d3661e3ae50c530dcb2241] <==
	I1119 02:53:23.784691       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 02:53:23.784964       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1119 02:53:23.785100       1 main.go:148] setting mtu 1500 for CNI 
	I1119 02:53:23.785112       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 02:53:23.785126       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T02:53:23Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 02:53:23.944696       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 02:53:23.944777       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 02:53:23.944827       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 02:53:23.945305       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1119 02:53:53.945671       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1119 02:53:53.945719       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1119 02:53:53.950183       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1119 02:53:53.950331       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1119 02:53:55.545463       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 02:53:55.545495       1 metrics.go:72] Registering metrics
	I1119 02:53:55.545589       1 controller.go:711] "Syncing nftables rules"
	I1119 02:54:03.949593       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1119 02:54:03.949642       1 main.go:301] handling current node
	
	
	==> kindnet [92c3ad91064be1f0a314b7990a3febf510451be94e08960b34dcdff3fadc057b] <==
	I1119 02:54:16.832094       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 02:54:16.833008       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1119 02:54:16.833250       1 main.go:148] setting mtu 1500 for CNI 
	I1119 02:54:16.833320       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 02:54:16.833359       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T02:54:17Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 02:54:17.095225       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 02:54:17.095245       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 02:54:17.095268       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	E1119 02:54:17.111707       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1119 02:54:17.111801       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1119 02:54:17.111864       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1119 02:54:17.111925       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1119 02:54:17.112021       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1119 02:54:22.695753       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 02:54:22.695790       1 metrics.go:72] Registering metrics
	I1119 02:54:22.695851       1 controller.go:711] "Syncing nftables rules"
	I1119 02:54:27.095715       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1119 02:54:27.095771       1 main.go:301] handling current node
	I1119 02:54:37.097587       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1119 02:54:37.097724       1 main.go:301] handling current node
	
	
	==> kube-apiserver [99c5fdf54f0795b8af2a7e440cbeb21a2991c76dbb380f799c0f0a3f93211efa] <==
	W1119 02:54:09.314761       1 logging.go:55] [core] [Channel #163 SubChannel #165]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 02:54:09.314814       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 02:54:09.314861       1 logging.go:55] [core] [Channel #187 SubChannel #189]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 02:54:09.314912       1 logging.go:55] [core] [Channel #13 SubChannel #15]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 02:54:09.314976       1 logging.go:55] [core] [Channel #179 SubChannel #181]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 02:54:09.315028       1 logging.go:55] [core] [Channel #47 SubChannel #49]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 02:54:09.315083       1 logging.go:55] [core] [Channel #135 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 02:54:09.316306       1 logging.go:55] [core] [Channel #95 SubChannel #97]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 02:54:09.316359       1 logging.go:55] [core] [Channel #247 SubChannel #249]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 02:54:09.316395       1 logging.go:55] [core] [Channel #147 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 02:54:09.316432       1 logging.go:55] [core] [Channel #99 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 02:54:09.319052       1 logging.go:55] [core] [Channel #183 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 02:54:09.319121       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 02:54:09.319171       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 02:54:09.319327       1 logging.go:55] [core] [Channel #75 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 02:54:09.319376       1 logging.go:55] [core] [Channel #91 SubChannel #93]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 02:54:09.319443       1 logging.go:55] [core] [Channel #131 SubChannel #133]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 02:54:09.319490       1 logging.go:55] [core] [Channel #123 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 02:54:09.319879       1 logging.go:55] [core] [Channel #223 SubChannel #225]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 02:54:09.319926       1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 02:54:09.319966       1 logging.go:55] [core] [Channel #175 SubChannel #177]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 02:54:09.320222       1 logging.go:55] [core] [Channel #139 SubChannel #141]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 02:54:09.320268       1 logging.go:55] [core] [Channel #203 SubChannel #205]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1119 02:54:09.320451       1 logging.go:55] [core] [Channel #239 SubChannel #241]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [eae9c2747a6c090141bed859a946caeda9db2b858e668856f491efbcc50cc1f0] <==
	I1119 02:54:22.648325       1 autoregister_controller.go:144] Starting autoregister controller
	I1119 02:54:22.648333       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1119 02:54:22.653705       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1119 02:54:22.666122       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1119 02:54:22.666233       1 policy_source.go:240] refreshing policies
	I1119 02:54:22.715474       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 02:54:22.720841       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1119 02:54:22.720938       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1119 02:54:22.721601       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1119 02:54:22.727278       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1119 02:54:22.727707       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1119 02:54:22.727956       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1119 02:54:22.727994       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1119 02:54:22.728318       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1119 02:54:22.730116       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1119 02:54:22.730241       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1119 02:54:22.732917       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 02:54:22.752983       1 cache.go:39] Caches are synced for autoregister controller
	E1119 02:54:22.756044       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1119 02:54:23.326598       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 02:54:23.586766       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1119 02:54:25.004404       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1119 02:54:25.053139       1 controller.go:667] quota admission added evaluator for: endpoints
	I1119 02:54:25.203543       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1119 02:54:25.353750       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [003c29925c9f74027c0217cc5ade71e94414e340d291dd591f874bc578fbea1e] <==
	I1119 02:54:24.986766       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1119 02:54:24.994881       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1119 02:54:24.995090       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1119 02:54:24.996153       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1119 02:54:24.996207       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1119 02:54:24.996251       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1119 02:54:24.996277       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1119 02:54:24.996304       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1119 02:54:24.996336       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1119 02:54:24.996398       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1119 02:54:24.996469       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-210634"
	I1119 02:54:24.996505       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1119 02:54:24.996544       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1119 02:54:24.996708       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1119 02:54:24.997332       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1119 02:54:24.999059       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1119 02:54:24.999184       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 02:54:25.004236       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1119 02:54:25.004614       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1119 02:54:25.004690       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1119 02:54:25.004722       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1119 02:54:25.004754       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1119 02:54:25.023164       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1119 02:54:25.030561       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1119 02:54:25.033944       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-controller-manager [6e6bccbb7a956f12be30b89d73d28b8866fad0012f5638de488e292311f075e7] <==
	I1119 02:53:22.291501       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1119 02:53:22.293775       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1119 02:53:22.293828       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1119 02:53:22.293855       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1119 02:53:22.293869       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1119 02:53:22.293875       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1119 02:53:22.301965       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1119 02:53:22.302420       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-210634" podCIDRs=["10.244.0.0/24"]
	I1119 02:53:22.303673       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1119 02:53:22.312273       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1119 02:53:22.313568       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1119 02:53:22.322911       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 02:53:22.322961       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1119 02:53:22.331430       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1119 02:53:22.331560       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1119 02:53:22.333231       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 02:53:22.333259       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1119 02:53:22.333284       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1119 02:53:22.333334       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1119 02:53:22.333359       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1119 02:53:22.333406       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1119 02:53:22.333573       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1119 02:53:22.334381       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1119 02:53:22.336147       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1119 02:54:07.242415       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [b89fe3c5e3979493011dde519b29f7ae915a6bd84a62073ff542628e53e0b863] <==
	I1119 02:53:25.102212       1 server_linux.go:53] "Using iptables proxy"
	I1119 02:53:25.195016       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 02:53:25.295119       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 02:53:25.295155       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1119 02:53:25.295241       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 02:53:25.312879       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 02:53:25.312927       1 server_linux.go:132] "Using iptables Proxier"
	I1119 02:53:25.316437       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 02:53:25.316773       1 server.go:527] "Version info" version="v1.34.1"
	I1119 02:53:25.316848       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 02:53:25.320042       1 config.go:106] "Starting endpoint slice config controller"
	I1119 02:53:25.320124       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 02:53:25.320449       1 config.go:200] "Starting service config controller"
	I1119 02:53:25.320492       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 02:53:25.320788       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 02:53:25.325719       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 02:53:25.326301       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1119 02:53:25.321086       1 config.go:309] "Starting node config controller"
	I1119 02:53:25.326323       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 02:53:25.326328       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 02:53:25.421189       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1119 02:53:25.421280       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-proxy [e150eb077157007e590ae1733965580b7175324548f25a32203bab129b2bd815] <==
	I1119 02:54:18.216586       1 server_linux.go:53] "Using iptables proxy"
	I1119 02:54:19.908039       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 02:54:22.763805       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 02:54:22.763912       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1119 02:54:22.764037       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 02:54:22.793239       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 02:54:22.793304       1 server_linux.go:132] "Using iptables Proxier"
	I1119 02:54:22.811993       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 02:54:22.812372       1 server.go:527] "Version info" version="v1.34.1"
	I1119 02:54:22.812603       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 02:54:22.814022       1 config.go:200] "Starting service config controller"
	I1119 02:54:22.814089       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 02:54:22.814108       1 config.go:106] "Starting endpoint slice config controller"
	I1119 02:54:22.814114       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 02:54:22.814140       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 02:54:22.814156       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 02:54:22.817796       1 config.go:309] "Starting node config controller"
	I1119 02:54:22.817867       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 02:54:22.817898       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 02:54:22.915070       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1119 02:54:22.915110       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1119 02:54:22.915078       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [5b278aa8fa67a48484345b54805bb5c29e9162a5f54af5ee074fef8c5766b072] <==
	I1119 02:54:19.921478       1 serving.go:386] Generated self-signed cert in-memory
	W1119 02:54:22.605585       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1119 02:54:22.605681       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1119 02:54:22.605716       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1119 02:54:22.605744       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1119 02:54:22.692638       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1119 02:54:22.692755       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 02:54:22.694844       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 02:54:22.694941       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 02:54:22.702773       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1119 02:54:22.702924       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1119 02:54:22.798150       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [e3f1e86ddd1d329884483c2ad1df7a1973076d6ec408d93655d53c17a56315e3] <==
	E1119 02:53:15.427446       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1119 02:53:15.427513       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1119 02:53:15.427628       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1119 02:53:15.427704       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1119 02:53:15.428357       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1119 02:53:16.265643       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1119 02:53:16.347973       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1119 02:53:16.350436       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1119 02:53:16.380950       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1119 02:53:16.386436       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1119 02:53:16.417121       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1119 02:53:16.443616       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1119 02:53:16.479547       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1119 02:53:16.537795       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1119 02:53:16.607410       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1119 02:53:16.629869       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1119 02:53:16.630004       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1119 02:53:16.649763       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	I1119 02:53:19.566799       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 02:54:09.308468       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1119 02:54:09.308596       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1119 02:54:09.308608       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1119 02:54:09.308625       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 02:54:09.308880       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1119 02:54:09.308897       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Nov 19 02:54:16 pause-210634 kubelet[1321]: E1119 02:54:16.545046    1321 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-210634\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="d320cc8614c047ff979ab73a2d0c54ae" pod="kube-system/etcd-pause-210634"
	Nov 19 02:54:16 pause-210634 kubelet[1321]: E1119 02:54:16.545364    1321 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-210634\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="7ada54b88577f8537950907f13b1cc63" pod="kube-system/kube-controller-manager-pause-210634"
	Nov 19 02:54:16 pause-210634 kubelet[1321]: E1119 02:54:16.545749    1321 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-210634\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="65cf838043746d121fddeeac147c794c" pod="kube-system/kube-apiserver-pause-210634"
	Nov 19 02:54:16 pause-210634 kubelet[1321]: E1119 02:54:16.546103    1321 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-w68ds\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="6af1936b-8342-4b94-8c66-84cea32746ff" pod="kube-system/kindnet-w68ds"
	Nov 19 02:54:16 pause-210634 kubelet[1321]: E1119 02:54:16.546444    1321 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r7bhh\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="d06bc070-5f4f-4e5d-9268-f0bbefdd7fdb" pod="kube-system/kube-proxy-r7bhh"
	Nov 19 02:54:16 pause-210634 kubelet[1321]: E1119 02:54:16.547027    1321 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-p4snv\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="35d307ff-e63a-486d-9eb8-95e7cf67119f" pod="kube-system/coredns-66bc5c9577-p4snv"
	Nov 19 02:54:16 pause-210634 kubelet[1321]: I1119 02:54:16.551297    1321 scope.go:117] "RemoveContainer" containerID="99c5fdf54f0795b8af2a7e440cbeb21a2991c76dbb380f799c0f0a3f93211efa"
	Nov 19 02:54:16 pause-210634 kubelet[1321]: E1119 02:54:16.551962    1321 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-210634\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="79bb44ecbd60873c555aabcdc1b97eff" pod="kube-system/kube-scheduler-pause-210634"
	Nov 19 02:54:16 pause-210634 kubelet[1321]: E1119 02:54:16.552287    1321 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-210634\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="d320cc8614c047ff979ab73a2d0c54ae" pod="kube-system/etcd-pause-210634"
	Nov 19 02:54:16 pause-210634 kubelet[1321]: E1119 02:54:16.552616    1321 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-210634\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="7ada54b88577f8537950907f13b1cc63" pod="kube-system/kube-controller-manager-pause-210634"
	Nov 19 02:54:16 pause-210634 kubelet[1321]: E1119 02:54:16.553428    1321 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-210634\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="65cf838043746d121fddeeac147c794c" pod="kube-system/kube-apiserver-pause-210634"
	Nov 19 02:54:16 pause-210634 kubelet[1321]: E1119 02:54:16.553855    1321 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-w68ds\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="6af1936b-8342-4b94-8c66-84cea32746ff" pod="kube-system/kindnet-w68ds"
	Nov 19 02:54:16 pause-210634 kubelet[1321]: E1119 02:54:16.554559    1321 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r7bhh\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="d06bc070-5f4f-4e5d-9268-f0bbefdd7fdb" pod="kube-system/kube-proxy-r7bhh"
	Nov 19 02:54:16 pause-210634 kubelet[1321]: E1119 02:54:16.555756    1321 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-p4snv\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="35d307ff-e63a-486d-9eb8-95e7cf67119f" pod="kube-system/coredns-66bc5c9577-p4snv"
	Nov 19 02:54:22 pause-210634 kubelet[1321]: E1119 02:54:22.376642    1321 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-210634\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-210634' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Nov 19 02:54:22 pause-210634 kubelet[1321]: E1119 02:54:22.377002    1321 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-210634\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-210634' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Nov 19 02:54:22 pause-210634 kubelet[1321]: E1119 02:54:22.377177    1321 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-w68ds\" is forbidden: User \"system:node:pause-210634\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-210634' and this object" podUID="6af1936b-8342-4b94-8c66-84cea32746ff" pod="kube-system/kindnet-w68ds"
	Nov 19 02:54:22 pause-210634 kubelet[1321]: E1119 02:54:22.378022    1321 reflector.go:205] "Failed to watch" err="configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:pause-210634\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-210634' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Nov 19 02:54:22 pause-210634 kubelet[1321]: E1119 02:54:22.447401    1321 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-r7bhh\" is forbidden: User \"system:node:pause-210634\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-210634' and this object" podUID="d06bc070-5f4f-4e5d-9268-f0bbefdd7fdb" pod="kube-system/kube-proxy-r7bhh"
	Nov 19 02:54:22 pause-210634 kubelet[1321]: E1119 02:54:22.624904    1321 status_manager.go:1018] "Failed to get status for pod" err="pods \"coredns-66bc5c9577-p4snv\" is forbidden: User \"system:node:pause-210634\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-210634' and this object" podUID="35d307ff-e63a-486d-9eb8-95e7cf67119f" pod="kube-system/coredns-66bc5c9577-p4snv"
	Nov 19 02:54:22 pause-210634 kubelet[1321]: E1119 02:54:22.642948    1321 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-210634\" is forbidden: User \"system:node:pause-210634\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-210634' and this object" podUID="79bb44ecbd60873c555aabcdc1b97eff" pod="kube-system/kube-scheduler-pause-210634"
	Nov 19 02:54:28 pause-210634 kubelet[1321]: W1119 02:54:28.582548    1321 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Nov 19 02:54:34 pause-210634 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 19 02:54:34 pause-210634 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 19 02:54:34 pause-210634 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-210634 -n pause-210634
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-210634 -n pause-210634: exit status 2 (367.600632ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-210634 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (6.99s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.43s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-525469 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-525469 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (269.984915ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:58:04Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-525469 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-525469 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-525469 describe deploy/metrics-server -n kube-system: exit status 1 (86.985755ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-525469 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-525469
helpers_test.go:243: (dbg) docker inspect old-k8s-version-525469:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8d5d18297d31faedbee0dfbb322407f6bfb9f69bcf20408c0cd6dd14bd8ebca9",
	        "Created": "2025-11-19T02:56:56.874847167Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1642046,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T02:56:56.952288205Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:6dfeb5329cc83d126555f88f43b09eef7a09c7f546c9166b94d33747df91b6df",
	        "ResolvConfPath": "/var/lib/docker/containers/8d5d18297d31faedbee0dfbb322407f6bfb9f69bcf20408c0cd6dd14bd8ebca9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8d5d18297d31faedbee0dfbb322407f6bfb9f69bcf20408c0cd6dd14bd8ebca9/hostname",
	        "HostsPath": "/var/lib/docker/containers/8d5d18297d31faedbee0dfbb322407f6bfb9f69bcf20408c0cd6dd14bd8ebca9/hosts",
	        "LogPath": "/var/lib/docker/containers/8d5d18297d31faedbee0dfbb322407f6bfb9f69bcf20408c0cd6dd14bd8ebca9/8d5d18297d31faedbee0dfbb322407f6bfb9f69bcf20408c0cd6dd14bd8ebca9-json.log",
	        "Name": "/old-k8s-version-525469",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-525469:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-525469",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8d5d18297d31faedbee0dfbb322407f6bfb9f69bcf20408c0cd6dd14bd8ebca9",
	                "LowerDir": "/var/lib/docker/overlay2/6626ee3152a36e280c4cbe358e2f948d8df311fa8c08ac4c768b9ba1c425fba4-init/diff:/var/lib/docker/overlay2/c48d08e2bd245db4e1c5c6447aff9f72126e9377265a1f1172daf5070a059e2a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6626ee3152a36e280c4cbe358e2f948d8df311fa8c08ac4c768b9ba1c425fba4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6626ee3152a36e280c4cbe358e2f948d8df311fa8c08ac4c768b9ba1c425fba4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6626ee3152a36e280c4cbe358e2f948d8df311fa8c08ac4c768b9ba1c425fba4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-525469",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-525469/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-525469",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-525469",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-525469",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2d204c5651a5775a00c508588fd8db5520bcd047ebdfd79e3ead7f7f05ea5969",
	            "SandboxKey": "/var/run/docker/netns/2d204c5651a5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34895"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34896"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34899"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34897"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34898"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-525469": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "06:04:93:14:cd:49",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cfcb5e1a34a21f833f4806a9351850a2b1b407ff4f69e6c1e4043b73bcdc3f29",
	                    "EndpointID": "ed216f0d59b609e1983023087651056766a81d3f03acb9e8f928d0c9a964104e",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-525469",
	                        "8d5d18297d31"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
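The inspect output above captures the host-side port mappings Docker assigned to the old-k8s-version-525469 container (22→34895, 2376→34896, 5000→34897, 8443→34898, 32443→34899). As a minimal sketch, the same lookup that the start logs below perform for the SSH port can be reproduced with the docker CLI's Go-template support; the container name is simply the profile name from this run:

    docker container inspect old-k8s-version-525469 \
      --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'
    # prints 34895 for the mapping shown above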
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-525469 -n old-k8s-version-525469
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-525469 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-525469 logs -n 25: (1.181454016s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-889743 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                     │ cilium-889743             │ jenkins │ v1.37.0 │ 19 Nov 25 02:55 UTC │                     │
	│ ssh     │ -p cilium-889743 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ cilium-889743             │ jenkins │ v1.37.0 │ 19 Nov 25 02:55 UTC │                     │
	│ ssh     │ -p cilium-889743 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ cilium-889743             │ jenkins │ v1.37.0 │ 19 Nov 25 02:55 UTC │                     │
	│ ssh     │ -p cilium-889743 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ cilium-889743             │ jenkins │ v1.37.0 │ 19 Nov 25 02:55 UTC │                     │
	│ ssh     │ -p cilium-889743 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-889743             │ jenkins │ v1.37.0 │ 19 Nov 25 02:55 UTC │                     │
	│ ssh     │ -p cilium-889743 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-889743             │ jenkins │ v1.37.0 │ 19 Nov 25 02:55 UTC │                     │
	│ ssh     │ -p cilium-889743 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-889743             │ jenkins │ v1.37.0 │ 19 Nov 25 02:55 UTC │                     │
	│ ssh     │ -p cilium-889743 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-889743             │ jenkins │ v1.37.0 │ 19 Nov 25 02:55 UTC │                     │
	│ ssh     │ -p cilium-889743 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-889743             │ jenkins │ v1.37.0 │ 19 Nov 25 02:55 UTC │                     │
	│ ssh     │ -p cilium-889743 sudo containerd config dump                                                                                                                                                                                                  │ cilium-889743             │ jenkins │ v1.37.0 │ 19 Nov 25 02:55 UTC │                     │
	│ ssh     │ -p cilium-889743 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-889743             │ jenkins │ v1.37.0 │ 19 Nov 25 02:55 UTC │                     │
	│ delete  │ -p kubernetes-upgrade-315505                                                                                                                                                                                                                  │ kubernetes-upgrade-315505 │ jenkins │ v1.37.0 │ 19 Nov 25 02:55 UTC │ 19 Nov 25 02:55 UTC │
	│ ssh     │ -p cilium-889743 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-889743             │ jenkins │ v1.37.0 │ 19 Nov 25 02:55 UTC │                     │
	│ ssh     │ -p cilium-889743 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-889743             │ jenkins │ v1.37.0 │ 19 Nov 25 02:55 UTC │                     │
	│ ssh     │ -p cilium-889743 sudo crio config                                                                                                                                                                                                             │ cilium-889743             │ jenkins │ v1.37.0 │ 19 Nov 25 02:55 UTC │                     │
	│ delete  │ -p cilium-889743                                                                                                                                                                                                                              │ cilium-889743             │ jenkins │ v1.37.0 │ 19 Nov 25 02:55 UTC │ 19 Nov 25 02:55 UTC │
	│ start   │ -p force-systemd-env-335811 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-335811  │ jenkins │ v1.37.0 │ 19 Nov 25 02:55 UTC │ 19 Nov 25 02:56 UTC │
	│ start   │ -p cert-expiration-422184 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-422184    │ jenkins │ v1.37.0 │ 19 Nov 25 02:55 UTC │ 19 Nov 25 02:56 UTC │
	│ delete  │ -p force-systemd-env-335811                                                                                                                                                                                                                   │ force-systemd-env-335811  │ jenkins │ v1.37.0 │ 19 Nov 25 02:56 UTC │ 19 Nov 25 02:56 UTC │
	│ start   │ -p cert-options-702842 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-702842       │ jenkins │ v1.37.0 │ 19 Nov 25 02:56 UTC │ 19 Nov 25 02:56 UTC │
	│ ssh     │ cert-options-702842 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-702842       │ jenkins │ v1.37.0 │ 19 Nov 25 02:56 UTC │ 19 Nov 25 02:56 UTC │
	│ ssh     │ -p cert-options-702842 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-702842       │ jenkins │ v1.37.0 │ 19 Nov 25 02:56 UTC │ 19 Nov 25 02:56 UTC │
	│ delete  │ -p cert-options-702842                                                                                                                                                                                                                        │ cert-options-702842       │ jenkins │ v1.37.0 │ 19 Nov 25 02:56 UTC │ 19 Nov 25 02:56 UTC │
	│ start   │ -p old-k8s-version-525469 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-525469    │ jenkins │ v1.37.0 │ 19 Nov 25 02:56 UTC │ 19 Nov 25 02:57 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-525469 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-525469    │ jenkins │ v1.37.0 │ 19 Nov 25 02:58 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
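	The last audit row is the command this test exercises: enabling metrics-server on the running cluster while overriding its image and pointing the MetricsServer registry at fake.domain (the row has no recorded end time). A sketch of the equivalent standalone invocation, assembled from that row and nothing more:

	    out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-525469 \
	      --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
	      --registries=MetricsServer=fake.domain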
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 02:56:51
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 02:56:51.084472 1641653 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:56:51.084714 1641653 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:56:51.084895 1641653 out.go:374] Setting ErrFile to fd 2...
	I1119 02:56:51.084933 1641653 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:56:51.085306 1641653 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-1463525/.minikube/bin
	I1119 02:56:51.085882 1641653 out.go:368] Setting JSON to false
	I1119 02:56:51.087077 1641653 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":38338,"bootTime":1763482673,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1119 02:56:51.087200 1641653 start.go:143] virtualization:  
	I1119 02:56:51.091322 1641653 out.go:179] * [old-k8s-version-525469] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1119 02:56:51.096336 1641653 notify.go:221] Checking for updates...
	I1119 02:56:51.097297 1641653 out.go:179]   - MINIKUBE_LOCATION=21924
	I1119 02:56:51.100887 1641653 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 02:56:51.104305 1641653 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21924-1463525/kubeconfig
	I1119 02:56:51.107583 1641653 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-1463525/.minikube
	I1119 02:56:51.110936 1641653 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1119 02:56:51.114291 1641653 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 02:56:51.118151 1641653 config.go:182] Loaded profile config "cert-expiration-422184": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:56:51.118274 1641653 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 02:56:51.151266 1641653 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1119 02:56:51.151463 1641653 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:56:51.218618 1641653 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-19 02:56:51.208877809 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 02:56:51.218736 1641653 docker.go:319] overlay module found
	I1119 02:56:51.222116 1641653 out.go:179] * Using the docker driver based on user configuration
	I1119 02:56:51.225138 1641653 start.go:309] selected driver: docker
	I1119 02:56:51.225168 1641653 start.go:930] validating driver "docker" against <nil>
	I1119 02:56:51.225181 1641653 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 02:56:51.226083 1641653 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:56:51.279583 1641653 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-19 02:56:51.271006873 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 02:56:51.279736 1641653 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1119 02:56:51.279962 1641653 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 02:56:51.283130 1641653 out.go:179] * Using Docker driver with root privileges
	I1119 02:56:51.286137 1641653 cni.go:84] Creating CNI manager for ""
	I1119 02:56:51.286200 1641653 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 02:56:51.286214 1641653 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1119 02:56:51.286298 1641653 start.go:353] cluster config:
	{Name:old-k8s-version-525469 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-525469 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SS
HAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:56:51.291223 1641653 out.go:179] * Starting "old-k8s-version-525469" primary control-plane node in "old-k8s-version-525469" cluster
	I1119 02:56:51.294226 1641653 cache.go:134] Beginning downloading kic base image for docker with crio
	I1119 02:56:51.297241 1641653 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1119 02:56:51.300103 1641653 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1119 02:56:51.300149 1641653 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1119 02:56:51.300162 1641653 cache.go:65] Caching tarball of preloaded images
	I1119 02:56:51.300184 1641653 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1119 02:56:51.300250 1641653 preload.go:238] Found /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1119 02:56:51.300260 1641653 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1119 02:56:51.300367 1641653 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/old-k8s-version-525469/config.json ...
	I1119 02:56:51.300386 1641653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/old-k8s-version-525469/config.json: {Name:mk1796b1577c41ec209353ae8b039d3c3b243a0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:56:51.319349 1641653 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1119 02:56:51.319370 1641653 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1119 02:56:51.319389 1641653 cache.go:243] Successfully downloaded all kic artifacts
	I1119 02:56:51.319412 1641653 start.go:360] acquireMachinesLock for old-k8s-version-525469: {Name:mke273077ae45177d2c7d6a69d1cf3f0fa926148 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 02:56:51.319522 1641653 start.go:364] duration metric: took 95.817µs to acquireMachinesLock for "old-k8s-version-525469"
	I1119 02:56:51.319548 1641653 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-525469 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-525469 Namespace:default APIServerHAVIP:
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQ
emuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 02:56:51.319614 1641653 start.go:125] createHost starting for "" (driver="docker")
	I1119 02:56:51.323129 1641653 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1119 02:56:51.323347 1641653 start.go:159] libmachine.API.Create for "old-k8s-version-525469" (driver="docker")
	I1119 02:56:51.323442 1641653 client.go:173] LocalClient.Create starting
	I1119 02:56:51.323577 1641653 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem
	I1119 02:56:51.323673 1641653 main.go:143] libmachine: Decoding PEM data...
	I1119 02:56:51.323693 1641653 main.go:143] libmachine: Parsing certificate...
	I1119 02:56:51.323732 1641653 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/cert.pem
	I1119 02:56:51.323755 1641653 main.go:143] libmachine: Decoding PEM data...
	I1119 02:56:51.323768 1641653 main.go:143] libmachine: Parsing certificate...
	I1119 02:56:51.324131 1641653 cli_runner.go:164] Run: docker network inspect old-k8s-version-525469 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1119 02:56:51.343119 1641653 cli_runner.go:211] docker network inspect old-k8s-version-525469 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1119 02:56:51.343219 1641653 network_create.go:284] running [docker network inspect old-k8s-version-525469] to gather additional debugging logs...
	I1119 02:56:51.343243 1641653 cli_runner.go:164] Run: docker network inspect old-k8s-version-525469
	W1119 02:56:51.359009 1641653 cli_runner.go:211] docker network inspect old-k8s-version-525469 returned with exit code 1
	I1119 02:56:51.359040 1641653 network_create.go:287] error running [docker network inspect old-k8s-version-525469]: docker network inspect old-k8s-version-525469: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-525469 not found
	I1119 02:56:51.359056 1641653 network_create.go:289] output of [docker network inspect old-k8s-version-525469]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-525469 not found
	
	** /stderr **
	I1119 02:56:51.359152 1641653 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 02:56:51.375289 1641653 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-30778cc553ec IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:62:24:59:d9:05:e6} reservation:<nil>}
	I1119 02:56:51.375689 1641653 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-564f8befa544 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:aa:bb:c9:f1:3d:0c} reservation:<nil>}
	I1119 02:56:51.375924 1641653 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-fccf9ce7bac2 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:92:9c:a6:ca:f9:d9} reservation:<nil>}
	I1119 02:56:51.376210 1641653 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-8b1bb2c80c17 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:f6:32:f4:5d:c8:d0} reservation:<nil>}
	I1119 02:56:51.376849 1641653 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a42540}
	I1119 02:56:51.376872 1641653 network_create.go:124] attempt to create docker network old-k8s-version-525469 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1119 02:56:51.376925 1641653 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-525469 old-k8s-version-525469
	I1119 02:56:51.452784 1641653 network_create.go:108] docker network old-k8s-version-525469 192.168.85.0/24 created
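	network_create above skips the 192.168.49/58/67/76 subnets already held by other minikube bridges and settles on 192.168.85.0/24 for this profile. A minimal sketch for listing those bridge networks and their subnets from the host, reusing the labels and inspect template the log itself applies (names and labels as created in this run):

	    docker network ls --filter label=created_by.minikube.sigs.k8s.io=true --format '{{.Name}}'
	    docker network inspect old-k8s-version-525469 \
	      --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
	    # expected for this run: 192.168.85.0/24 192.168.85.1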
	I1119 02:56:51.452812 1641653 kic.go:121] calculated static IP "192.168.85.2" for the "old-k8s-version-525469" container
	I1119 02:56:51.452880 1641653 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1119 02:56:51.468724 1641653 cli_runner.go:164] Run: docker volume create old-k8s-version-525469 --label name.minikube.sigs.k8s.io=old-k8s-version-525469 --label created_by.minikube.sigs.k8s.io=true
	I1119 02:56:51.485938 1641653 oci.go:103] Successfully created a docker volume old-k8s-version-525469
	I1119 02:56:51.486032 1641653 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-525469-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-525469 --entrypoint /usr/bin/test -v old-k8s-version-525469:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib
	I1119 02:56:52.043195 1641653 oci.go:107] Successfully prepared a docker volume old-k8s-version-525469
	I1119 02:56:52.043267 1641653 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1119 02:56:52.043282 1641653 kic.go:194] Starting extracting preloaded images to volume ...
	I1119 02:56:52.043348 1641653 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-525469:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir
	I1119 02:56:56.793767 1641653 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-525469:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir: (4.750373367s)
	I1119 02:56:56.793796 1641653 kic.go:203] duration metric: took 4.750510035s to extract preloaded images to volume ...
	W1119 02:56:56.793930 1641653 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1119 02:56:56.794045 1641653 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1119 02:56:56.855337 1641653 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-525469 --name old-k8s-version-525469 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-525469 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-525469 --network old-k8s-version-525469 --ip 192.168.85.2 --volume old-k8s-version-525469:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a
	I1119 02:56:57.179936 1641653 cli_runner.go:164] Run: docker container inspect old-k8s-version-525469 --format={{.State.Running}}
	I1119 02:56:57.206896 1641653 cli_runner.go:164] Run: docker container inspect old-k8s-version-525469 --format={{.State.Status}}
	I1119 02:56:57.232578 1641653 cli_runner.go:164] Run: docker exec old-k8s-version-525469 stat /var/lib/dpkg/alternatives/iptables
	I1119 02:56:57.284143 1641653 oci.go:144] the created container "old-k8s-version-525469" has a running status.
	I1119 02:56:57.284174 1641653 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21924-1463525/.minikube/machines/old-k8s-version-525469/id_rsa...
	I1119 02:56:58.118078 1641653 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21924-1463525/.minikube/machines/old-k8s-version-525469/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1119 02:56:58.138329 1641653 cli_runner.go:164] Run: docker container inspect old-k8s-version-525469 --format={{.State.Status}}
	I1119 02:56:58.153848 1641653 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1119 02:56:58.153868 1641653 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-525469 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1119 02:56:58.199958 1641653 cli_runner.go:164] Run: docker container inspect old-k8s-version-525469 --format={{.State.Status}}
	I1119 02:56:58.219643 1641653 machine.go:94] provisionDockerMachine start ...
	I1119 02:56:58.219739 1641653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-525469
	I1119 02:56:58.239182 1641653 main.go:143] libmachine: Using SSH client type: native
	I1119 02:56:58.239534 1641653 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34895 <nil> <nil>}
	I1119 02:56:58.239545 1641653 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 02:56:58.397002 1641653 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-525469
	
	I1119 02:56:58.397068 1641653 ubuntu.go:182] provisioning hostname "old-k8s-version-525469"
	I1119 02:56:58.397174 1641653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-525469
	I1119 02:56:58.418139 1641653 main.go:143] libmachine: Using SSH client type: native
	I1119 02:56:58.418439 1641653 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34895 <nil> <nil>}
	I1119 02:56:58.418450 1641653 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-525469 && echo "old-k8s-version-525469" | sudo tee /etc/hostname
	I1119 02:56:58.576403 1641653 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-525469
	
	I1119 02:56:58.576483 1641653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-525469
	I1119 02:56:58.594257 1641653 main.go:143] libmachine: Using SSH client type: native
	I1119 02:56:58.594583 1641653 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34895 <nil> <nil>}
	I1119 02:56:58.594607 1641653 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-525469' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-525469/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-525469' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 02:56:58.737557 1641653 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 02:56:58.737583 1641653 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21924-1463525/.minikube CaCertPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21924-1463525/.minikube}
	I1119 02:56:58.737611 1641653 ubuntu.go:190] setting up certificates
	I1119 02:56:58.737621 1641653 provision.go:84] configureAuth start
	I1119 02:56:58.737681 1641653 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-525469
	I1119 02:56:58.754364 1641653 provision.go:143] copyHostCerts
	I1119 02:56:58.754435 1641653 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.pem, removing ...
	I1119 02:56:58.754448 1641653 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.pem
	I1119 02:56:58.754531 1641653 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.pem (1078 bytes)
	I1119 02:56:58.754638 1641653 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-1463525/.minikube/cert.pem, removing ...
	I1119 02:56:58.754654 1641653 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-1463525/.minikube/cert.pem
	I1119 02:56:58.754684 1641653 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21924-1463525/.minikube/cert.pem (1123 bytes)
	I1119 02:56:58.754744 1641653 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-1463525/.minikube/key.pem, removing ...
	I1119 02:56:58.754753 1641653 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-1463525/.minikube/key.pem
	I1119 02:56:58.754778 1641653 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21924-1463525/.minikube/key.pem (1675 bytes)
	I1119 02:56:58.754831 1641653 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-525469 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-525469]
	I1119 02:57:01.329951 1641653 provision.go:177] copyRemoteCerts
	I1119 02:57:01.330071 1641653 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 02:57:01.330226 1641653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-525469
	I1119 02:57:01.348506 1641653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34895 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/old-k8s-version-525469/id_rsa Username:docker}
	I1119 02:57:01.457327 1641653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1119 02:57:01.481168 1641653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1119 02:57:01.498938 1641653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1119 02:57:01.521480 1641653 provision.go:87] duration metric: took 2.783844024s to configureAuth
	I1119 02:57:01.521550 1641653 ubuntu.go:206] setting minikube options for container-runtime
	I1119 02:57:01.521809 1641653 config.go:182] Loaded profile config "old-k8s-version-525469": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1119 02:57:01.521917 1641653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-525469
	I1119 02:57:01.540183 1641653 main.go:143] libmachine: Using SSH client type: native
	I1119 02:57:01.540496 1641653 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34895 <nil> <nil>}
	I1119 02:57:01.540517 1641653 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 02:57:01.861840 1641653 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 02:57:01.861865 1641653 machine.go:97] duration metric: took 3.642195132s to provisionDockerMachine
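	provisionDockerMachine finishes by writing CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 ' to /etc/sysconfig/crio.minikube inside the node and restarting CRI-O. A quick way to confirm the file landed, sketched with minikube's own ssh subcommand and the profile name from this run:

	    out/minikube-linux-arm64 -p old-k8s-version-525469 ssh -- cat /etc/sysconfig/crio.minikube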
	I1119 02:57:01.861874 1641653 client.go:176] duration metric: took 10.538405744s to LocalClient.Create
	I1119 02:57:01.861888 1641653 start.go:167] duration metric: took 10.538541495s to libmachine.API.Create "old-k8s-version-525469"
	I1119 02:57:01.861894 1641653 start.go:293] postStartSetup for "old-k8s-version-525469" (driver="docker")
	I1119 02:57:01.861904 1641653 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 02:57:01.861969 1641653 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 02:57:01.862020 1641653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-525469
	I1119 02:57:01.880380 1641653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34895 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/old-k8s-version-525469/id_rsa Username:docker}
	I1119 02:57:01.985463 1641653 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 02:57:01.988781 1641653 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 02:57:01.988807 1641653 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 02:57:01.988818 1641653 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-1463525/.minikube/addons for local assets ...
	I1119 02:57:01.988884 1641653 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-1463525/.minikube/files for local assets ...
	I1119 02:57:01.988962 1641653 filesync.go:149] local asset: /home/jenkins/minikube-integration/21924-1463525/.minikube/files/etc/ssl/certs/14653772.pem -> 14653772.pem in /etc/ssl/certs
	I1119 02:57:01.989062 1641653 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 02:57:01.997129 1641653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/files/etc/ssl/certs/14653772.pem --> /etc/ssl/certs/14653772.pem (1708 bytes)
	I1119 02:57:02.018815 1641653 start.go:296] duration metric: took 156.904525ms for postStartSetup
	I1119 02:57:02.019280 1641653 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-525469
	I1119 02:57:02.037270 1641653 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/old-k8s-version-525469/config.json ...
	I1119 02:57:02.037673 1641653 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 02:57:02.037726 1641653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-525469
	I1119 02:57:02.054890 1641653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34895 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/old-k8s-version-525469/id_rsa Username:docker}
	I1119 02:57:02.158896 1641653 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 02:57:02.163806 1641653 start.go:128] duration metric: took 10.844176394s to createHost
	I1119 02:57:02.163833 1641653 start.go:83] releasing machines lock for "old-k8s-version-525469", held for 10.844302117s
	I1119 02:57:02.163931 1641653 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-525469
	I1119 02:57:02.181367 1641653 ssh_runner.go:195] Run: cat /version.json
	I1119 02:57:02.181426 1641653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-525469
	I1119 02:57:02.181691 1641653 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 02:57:02.181763 1641653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-525469
	I1119 02:57:02.213316 1641653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34895 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/old-k8s-version-525469/id_rsa Username:docker}
	I1119 02:57:02.216836 1641653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34895 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/old-k8s-version-525469/id_rsa Username:docker}
	I1119 02:57:02.427023 1641653 ssh_runner.go:195] Run: systemctl --version
	I1119 02:57:02.433314 1641653 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 02:57:02.468301 1641653 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 02:57:02.472567 1641653 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 02:57:02.472635 1641653 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 02:57:02.503102 1641653 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1119 02:57:02.503128 1641653 start.go:496] detecting cgroup driver to use...
	I1119 02:57:02.503160 1641653 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1119 02:57:02.503210 1641653 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 02:57:02.521321 1641653 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 02:57:02.537277 1641653 docker.go:218] disabling cri-docker service (if available) ...
	I1119 02:57:02.537379 1641653 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 02:57:02.561721 1641653 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 02:57:02.584960 1641653 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 02:57:02.781867 1641653 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 02:57:02.937129 1641653 docker.go:234] disabling docker service ...
	I1119 02:57:02.937224 1641653 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 02:57:02.961833 1641653 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 02:57:02.975603 1641653 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 02:57:03.153041 1641653 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 02:57:03.273113 1641653 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 02:57:03.287031 1641653 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 02:57:03.301671 1641653 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1119 02:57:03.301786 1641653 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:57:03.311317 1641653 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1119 02:57:03.311436 1641653 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:57:03.320284 1641653 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:57:03.329620 1641653 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:57:03.339024 1641653 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 02:57:03.346835 1641653 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:57:03.355475 1641653 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:57:03.368468 1641653 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:57:03.377869 1641653 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 02:57:03.386359 1641653 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 02:57:03.393647 1641653 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:57:03.521829 1641653 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1119 02:57:03.686590 1641653 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 02:57:03.686684 1641653 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 02:57:03.690424 1641653 start.go:564] Will wait 60s for crictl version
	I1119 02:57:03.690524 1641653 ssh_runner.go:195] Run: which crictl
	I1119 02:57:03.694011 1641653 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 02:57:03.724160 1641653 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1119 02:57:03.724296 1641653 ssh_runner.go:195] Run: crio --version
	I1119 02:57:03.753114 1641653 ssh_runner.go:195] Run: crio --version
	I1119 02:57:03.786909 1641653 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.2 ...
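	The runtime prep above rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause_image, cgroup_manager=cgroupfs, conmon_cgroup, the unprivileged-port sysctl) and restarts CRI-O, after which crictl reports cri-o 1.34.2. A minimal sketch for spot-checking those values on the node, assuming the same profile:

	    out/minikube-linux-arm64 -p old-k8s-version-525469 ssh -- \
	      sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	    # expected: pause_image = "registry.k8s.io/pause:3.9" and cgroup_manager = "cgroupfs"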
	I1119 02:57:03.789838 1641653 cli_runner.go:164] Run: docker network inspect old-k8s-version-525469 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 02:57:03.807796 1641653 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1119 02:57:03.811599 1641653 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 02:57:03.822202 1641653 kubeadm.go:884] updating cluster {Name:old-k8s-version-525469 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-525469 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirm
warePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 02:57:03.822334 1641653 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1119 02:57:03.822395 1641653 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 02:57:03.852531 1641653 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 02:57:03.852556 1641653 crio.go:433] Images already preloaded, skipping extraction
	I1119 02:57:03.852611 1641653 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 02:57:03.877998 1641653 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 02:57:03.878022 1641653 cache_images.go:86] Images are preloaded, skipping loading
	I1119 02:57:03.878031 1641653 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1119 02:57:03.878116 1641653 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-525469 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-525469 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
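The ExecStart override above is shipped to the node as a systemd drop-in (the 10-kubeadm.conf scp a few lines further down); as a rough way to confirm the merged unit on a docker-driver node, one could run something like:

	out/minikube-linux-arm64 -p old-k8s-version-525469 ssh -- sudo systemctl cat kubelet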
	I1119 02:57:03.878205 1641653 ssh_runner.go:195] Run: crio config
	I1119 02:57:03.956481 1641653 cni.go:84] Creating CNI manager for ""
	I1119 02:57:03.956519 1641653 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 02:57:03.956555 1641653 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 02:57:03.956600 1641653 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-525469 NodeName:old-k8s-version-525469 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 02:57:03.956775 1641653 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-525469"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1119 02:57:03.956873 1641653 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1119 02:57:03.964574 1641653 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 02:57:03.964649 1641653 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 02:57:03.972200 1641653 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1119 02:57:03.984977 1641653 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 02:57:03.997299 1641653 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I1119 02:57:04.012964 1641653 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1119 02:57:04.016868 1641653 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 02:57:04.027396 1641653 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:57:04.148797 1641653 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 02:57:04.166001 1641653 certs.go:69] Setting up /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/old-k8s-version-525469 for IP: 192.168.85.2
	I1119 02:57:04.166025 1641653 certs.go:195] generating shared ca certs ...
	I1119 02:57:04.166042 1641653 certs.go:227] acquiring lock for ca certs: {Name:mk25124d30540ffea11da5f0aa38fb6f55186602 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:57:04.166186 1641653 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.key
	I1119 02:57:04.166233 1641653 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/proxy-client-ca.key
	I1119 02:57:04.166243 1641653 certs.go:257] generating profile certs ...
	I1119 02:57:04.166304 1641653 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/old-k8s-version-525469/client.key
	I1119 02:57:04.166320 1641653 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/old-k8s-version-525469/client.crt with IP's: []
	I1119 02:57:04.939936 1641653 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/old-k8s-version-525469/client.crt ...
	I1119 02:57:04.939972 1641653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/old-k8s-version-525469/client.crt: {Name:mk675e96f955d0901b24253c95ca21a478ce8304 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:57:04.940176 1641653 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/old-k8s-version-525469/client.key ...
	I1119 02:57:04.940191 1641653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/old-k8s-version-525469/client.key: {Name:mk3f0c3d777626fad7a7c55622f30dec4023e816 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:57:04.940289 1641653 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/old-k8s-version-525469/apiserver.key.565ba304
	I1119 02:57:04.940307 1641653 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/old-k8s-version-525469/apiserver.crt.565ba304 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1119 02:57:05.058724 1641653 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/old-k8s-version-525469/apiserver.crt.565ba304 ...
	I1119 02:57:05.058754 1641653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/old-k8s-version-525469/apiserver.crt.565ba304: {Name:mk30c57468c352b7e5d9e763346af6e51459b13a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:57:05.058967 1641653 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/old-k8s-version-525469/apiserver.key.565ba304 ...
	I1119 02:57:05.058983 1641653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/old-k8s-version-525469/apiserver.key.565ba304: {Name:mk9a787fb21f1314befa7aedf706d598628b1c31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:57:05.059074 1641653 certs.go:382] copying /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/old-k8s-version-525469/apiserver.crt.565ba304 -> /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/old-k8s-version-525469/apiserver.crt
	I1119 02:57:05.059162 1641653 certs.go:386] copying /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/old-k8s-version-525469/apiserver.key.565ba304 -> /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/old-k8s-version-525469/apiserver.key
	I1119 02:57:05.059227 1641653 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/old-k8s-version-525469/proxy-client.key
	I1119 02:57:05.059246 1641653 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/old-k8s-version-525469/proxy-client.crt with IP's: []
	I1119 02:57:05.694393 1641653 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/old-k8s-version-525469/proxy-client.crt ...
	I1119 02:57:05.694427 1641653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/old-k8s-version-525469/proxy-client.crt: {Name:mk1b2b7bb0a7731034574df12c5536e803aaa884 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:57:05.694619 1641653 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/old-k8s-version-525469/proxy-client.key ...
	I1119 02:57:05.694634 1641653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/old-k8s-version-525469/proxy-client.key: {Name:mk715e57e97cf3bbae350eaa2bc7b12b17072346 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:57:05.694870 1641653 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/1465377.pem (1338 bytes)
	W1119 02:57:05.694929 1641653 certs.go:480] ignoring /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/1465377_empty.pem, impossibly tiny 0 bytes
	I1119 02:57:05.694943 1641653 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca-key.pem (1679 bytes)
	I1119 02:57:05.694967 1641653 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem (1078 bytes)
	I1119 02:57:05.694995 1641653 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/cert.pem (1123 bytes)
	I1119 02:57:05.695022 1641653 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/key.pem (1675 bytes)
	I1119 02:57:05.695070 1641653 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/files/etc/ssl/certs/14653772.pem (1708 bytes)
	I1119 02:57:05.695737 1641653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 02:57:05.714560 1641653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1119 02:57:05.735419 1641653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 02:57:05.753732 1641653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1119 02:57:05.784056 1641653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/old-k8s-version-525469/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1119 02:57:05.806605 1641653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/old-k8s-version-525469/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1119 02:57:05.830602 1641653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/old-k8s-version-525469/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 02:57:05.851746 1641653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/old-k8s-version-525469/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1119 02:57:05.869243 1641653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/1465377.pem --> /usr/share/ca-certificates/1465377.pem (1338 bytes)
	I1119 02:57:05.888863 1641653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/files/etc/ssl/certs/14653772.pem --> /usr/share/ca-certificates/14653772.pem (1708 bytes)
	I1119 02:57:05.911206 1641653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 02:57:05.931670 1641653 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 02:57:05.944390 1641653 ssh_runner.go:195] Run: openssl version
	I1119 02:57:05.951214 1641653 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1465377.pem && ln -fs /usr/share/ca-certificates/1465377.pem /etc/ssl/certs/1465377.pem"
	I1119 02:57:05.959862 1641653 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1465377.pem
	I1119 02:57:05.963737 1641653 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 02:04 /usr/share/ca-certificates/1465377.pem
	I1119 02:57:05.963799 1641653 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1465377.pem
	I1119 02:57:06.006174 1641653 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1465377.pem /etc/ssl/certs/51391683.0"
	I1119 02:57:06.016094 1641653 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14653772.pem && ln -fs /usr/share/ca-certificates/14653772.pem /etc/ssl/certs/14653772.pem"
	I1119 02:57:06.025539 1641653 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14653772.pem
	I1119 02:57:06.030782 1641653 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 02:04 /usr/share/ca-certificates/14653772.pem
	I1119 02:57:06.030848 1641653 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14653772.pem
	I1119 02:57:06.072605 1641653 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14653772.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 02:57:06.081397 1641653 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 02:57:06.090536 1641653 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:57:06.094456 1641653 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 01:58 /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:57:06.094528 1641653 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:57:06.137601 1641653 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
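The test-and-link pattern above is OpenSSL's hashed-directory lookup: each CA gets symlinked as <subject-hash>.0 under /etc/ssl/certs. As an illustration of where b5213941 comes from, the hash printed by the x509 command two lines earlier names the symlink:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# -> b5213941, hence /etc/ssl/certs/b5213941.0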
	I1119 02:57:06.146744 1641653 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 02:57:06.150218 1641653 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1119 02:57:06.150270 1641653 kubeadm.go:401] StartCluster: {Name:old-k8s-version-525469 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-525469 Namespace:default APIServerHAVIP: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwar
ePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:57:06.150362 1641653 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 02:57:06.150434 1641653 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 02:57:06.181238 1641653 cri.go:89] found id: ""
	I1119 02:57:06.181384 1641653 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 02:57:06.189488 1641653 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
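At this point the rendered config is in place at /var/tmp/minikube/kubeadm.yaml; a way to sanity-check such a file without actually bootstrapping anything (not something this test does, just a sketch) is kubeadm's dry-run mode on the node:

	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run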
	I1119 02:57:06.196771 1641653 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1119 02:57:06.196865 1641653 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1119 02:57:06.204303 1641653 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1119 02:57:06.204322 1641653 kubeadm.go:158] found existing configuration files:
	
	I1119 02:57:06.204400 1641653 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1119 02:57:06.212260 1641653 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1119 02:57:06.212362 1641653 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1119 02:57:06.219679 1641653 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1119 02:57:06.227639 1641653 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1119 02:57:06.227750 1641653 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1119 02:57:06.234994 1641653 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1119 02:57:06.242557 1641653 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1119 02:57:06.242645 1641653 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1119 02:57:06.249864 1641653 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1119 02:57:06.257127 1641653 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1119 02:57:06.257187 1641653 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1119 02:57:06.264200 1641653 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1119 02:57:06.350639 1641653 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1119 02:57:06.444271 1641653 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1119 02:57:24.845052 1641653 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1119 02:57:24.845109 1641653 kubeadm.go:319] [preflight] Running pre-flight checks
	I1119 02:57:24.845200 1641653 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1119 02:57:24.845257 1641653 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1119 02:57:24.845293 1641653 kubeadm.go:319] OS: Linux
	I1119 02:57:24.845339 1641653 kubeadm.go:319] CGROUPS_CPU: enabled
	I1119 02:57:24.845389 1641653 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1119 02:57:24.845439 1641653 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1119 02:57:24.845489 1641653 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1119 02:57:24.845551 1641653 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1119 02:57:24.845643 1641653 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1119 02:57:24.845706 1641653 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1119 02:57:24.845760 1641653 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1119 02:57:24.845813 1641653 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1119 02:57:24.845886 1641653 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1119 02:57:24.845985 1641653 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1119 02:57:24.846087 1641653 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1119 02:57:24.846156 1641653 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1119 02:57:24.849356 1641653 out.go:252]   - Generating certificates and keys ...
	I1119 02:57:24.849447 1641653 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1119 02:57:24.849553 1641653 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1119 02:57:24.849656 1641653 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1119 02:57:24.849745 1641653 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1119 02:57:24.849830 1641653 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1119 02:57:24.849901 1641653 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1119 02:57:24.849978 1641653 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1119 02:57:24.850124 1641653 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-525469] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1119 02:57:24.850190 1641653 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1119 02:57:24.850336 1641653 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-525469] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1119 02:57:24.850418 1641653 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1119 02:57:24.850505 1641653 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1119 02:57:24.850589 1641653 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1119 02:57:24.850653 1641653 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1119 02:57:24.850704 1641653 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1119 02:57:24.850757 1641653 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1119 02:57:24.850833 1641653 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1119 02:57:24.850887 1641653 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1119 02:57:24.850983 1641653 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1119 02:57:24.851050 1641653 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1119 02:57:24.854219 1641653 out.go:252]   - Booting up control plane ...
	I1119 02:57:24.854318 1641653 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1119 02:57:24.854474 1641653 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1119 02:57:24.854595 1641653 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1119 02:57:24.854753 1641653 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1119 02:57:24.854876 1641653 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1119 02:57:24.854949 1641653 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1119 02:57:24.855177 1641653 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1119 02:57:24.855297 1641653 kubeadm.go:319] [apiclient] All control plane components are healthy after 8.503067 seconds
	I1119 02:57:24.855457 1641653 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1119 02:57:24.855633 1641653 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1119 02:57:24.855733 1641653 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1119 02:57:24.855985 1641653 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-525469 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1119 02:57:24.856085 1641653 kubeadm.go:319] [bootstrap-token] Using token: jx3ufr.0muadgnkr8ihwa5d
	I1119 02:57:24.859062 1641653 out.go:252]   - Configuring RBAC rules ...
	I1119 02:57:24.859235 1641653 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1119 02:57:24.859378 1641653 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1119 02:57:24.859592 1641653 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1119 02:57:24.859781 1641653 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1119 02:57:24.859959 1641653 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1119 02:57:24.860128 1641653 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1119 02:57:24.860257 1641653 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1119 02:57:24.860305 1641653 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1119 02:57:24.860355 1641653 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1119 02:57:24.860359 1641653 kubeadm.go:319] 
	I1119 02:57:24.860424 1641653 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1119 02:57:24.860428 1641653 kubeadm.go:319] 
	I1119 02:57:24.860512 1641653 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1119 02:57:24.860517 1641653 kubeadm.go:319] 
	I1119 02:57:24.860544 1641653 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1119 02:57:24.860607 1641653 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1119 02:57:24.860660 1641653 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1119 02:57:24.860665 1641653 kubeadm.go:319] 
	I1119 02:57:24.860723 1641653 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1119 02:57:24.860727 1641653 kubeadm.go:319] 
	I1119 02:57:24.860778 1641653 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1119 02:57:24.860783 1641653 kubeadm.go:319] 
	I1119 02:57:24.860839 1641653 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1119 02:57:24.860918 1641653 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1119 02:57:24.860992 1641653 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1119 02:57:24.860996 1641653 kubeadm.go:319] 
	I1119 02:57:24.861086 1641653 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1119 02:57:24.861168 1641653 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1119 02:57:24.861172 1641653 kubeadm.go:319] 
	I1119 02:57:24.861262 1641653 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token jx3ufr.0muadgnkr8ihwa5d \
	I1119 02:57:24.861372 1641653 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:abb22cc8ae8e186956cff8cc7dabd6326c697e35c4ead85bcd3b5702cdc3f73a \
	I1119 02:57:24.861394 1641653 kubeadm.go:319] 	--control-plane 
	I1119 02:57:24.861398 1641653 kubeadm.go:319] 
	I1119 02:57:24.861489 1641653 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1119 02:57:24.861493 1641653 kubeadm.go:319] 
	I1119 02:57:24.861593 1641653 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token jx3ufr.0muadgnkr8ihwa5d \
	I1119 02:57:24.861720 1641653 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:abb22cc8ae8e186956cff8cc7dabd6326c697e35c4ead85bcd3b5702cdc3f73a 
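For reference, the --discovery-token-ca-cert-hash printed above can be recomputed from the cluster CA; this is the standard kubeadm recipe, shown here against minikube's certificatesDir (/var/lib/minikube/certs) rather than the default /etc/kubernetes/pki:

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'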
	I1119 02:57:24.861729 1641653 cni.go:84] Creating CNI manager for ""
	I1119 02:57:24.861736 1641653 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 02:57:24.864908 1641653 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1119 02:57:24.867833 1641653 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1119 02:57:24.872785 1641653 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1119 02:57:24.872804 1641653 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1119 02:57:24.919683 1641653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1119 02:57:25.914362 1641653 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1119 02:57:25.914591 1641653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:57:25.914729 1641653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-525469 minikube.k8s.io/updated_at=2025_11_19T02_57_25_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ebd49efe8b7d44f89fc96e21beffcd64dfee8277 minikube.k8s.io/name=old-k8s-version-525469 minikube.k8s.io/primary=true
	I1119 02:57:26.085422 1641653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:57:26.085570 1641653 ops.go:34] apiserver oom_adj: -16
	I1119 02:57:26.586430 1641653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:57:27.085833 1641653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:57:27.586245 1641653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:57:28.086302 1641653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:57:28.586019 1641653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:57:29.086086 1641653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:57:29.585960 1641653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:57:30.086073 1641653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:57:30.585714 1641653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:57:31.086436 1641653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:57:31.586298 1641653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:57:32.086382 1641653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:57:32.585567 1641653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:57:33.085680 1641653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:57:33.585562 1641653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:57:34.086076 1641653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:57:34.586457 1641653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:57:35.086002 1641653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:57:35.586180 1641653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:57:36.086374 1641653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:57:36.585989 1641653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:57:37.086499 1641653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:57:37.585691 1641653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 02:57:37.701852 1641653 kubeadm.go:1114] duration metric: took 11.787312773s to wait for elevateKubeSystemPrivileges
	I1119 02:57:37.701885 1641653 kubeadm.go:403] duration metric: took 31.551618981s to StartCluster
	I1119 02:57:37.701902 1641653 settings.go:142] acquiring lock: {Name:mk60840cd190596747bdc8f4ec6ab9c30048bf11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:57:37.701961 1641653 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21924-1463525/kubeconfig
	I1119 02:57:37.703058 1641653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/kubeconfig: {Name:mk569dbb5fa76b01404eb61ebb04e9884665ea78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:57:37.703270 1641653 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 02:57:37.703392 1641653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1119 02:57:37.703640 1641653 config.go:182] Loaded profile config "old-k8s-version-525469": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1119 02:57:37.703683 1641653 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 02:57:37.703754 1641653 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-525469"
	I1119 02:57:37.703770 1641653 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-525469"
	I1119 02:57:37.703791 1641653 host.go:66] Checking if "old-k8s-version-525469" exists ...
	I1119 02:57:37.704271 1641653 cli_runner.go:164] Run: docker container inspect old-k8s-version-525469 --format={{.State.Status}}
	I1119 02:57:37.704793 1641653 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-525469"
	I1119 02:57:37.704817 1641653 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-525469"
	I1119 02:57:37.705069 1641653 cli_runner.go:164] Run: docker container inspect old-k8s-version-525469 --format={{.State.Status}}
	I1119 02:57:37.707987 1641653 out.go:179] * Verifying Kubernetes components...
	I1119 02:57:37.711727 1641653 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:57:37.737841 1641653 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 02:57:37.740728 1641653 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 02:57:37.740750 1641653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 02:57:37.740822 1641653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-525469
	I1119 02:57:37.760229 1641653 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-525469"
	I1119 02:57:37.760265 1641653 host.go:66] Checking if "old-k8s-version-525469" exists ...
	I1119 02:57:37.760669 1641653 cli_runner.go:164] Run: docker container inspect old-k8s-version-525469 --format={{.State.Status}}
	I1119 02:57:37.793658 1641653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34895 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/old-k8s-version-525469/id_rsa Username:docker}
	I1119 02:57:37.807183 1641653 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 02:57:37.807209 1641653 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 02:57:37.807279 1641653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-525469
	I1119 02:57:37.836686 1641653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34895 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/old-k8s-version-525469/id_rsa Username:docker}
	I1119 02:57:38.071400 1641653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 02:57:38.078794 1641653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
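The sed pipeline above splices a hosts block (plus a log directive) into the CoreDNS Corefile before replacing the ConfigMap; assuming the stock Corefile layout, the stanza it adds is:

	hosts {
	   192.168.85.1 host.minikube.internal
	   fallthrough
	}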
	I1119 02:57:38.078914 1641653 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 02:57:38.133405 1641653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 02:57:38.956479 1641653 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-525469" to be "Ready" ...
	I1119 02:57:38.956824 1641653 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1119 02:57:39.023330 1641653 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1119 02:57:39.038888 1641653 addons.go:515] duration metric: took 1.335165222s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1119 02:57:39.462541 1641653 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-525469" context rescaled to 1 replicas
	W1119 02:57:40.960854 1641653 node_ready.go:57] node "old-k8s-version-525469" has "Ready":"False" status (will retry)
	W1119 02:57:43.460269 1641653 node_ready.go:57] node "old-k8s-version-525469" has "Ready":"False" status (will retry)
	W1119 02:57:45.960265 1641653 node_ready.go:57] node "old-k8s-version-525469" has "Ready":"False" status (will retry)
	W1119 02:57:48.459724 1641653 node_ready.go:57] node "old-k8s-version-525469" has "Ready":"False" status (will retry)
	W1119 02:57:50.460388 1641653 node_ready.go:57] node "old-k8s-version-525469" has "Ready":"False" status (will retry)
	I1119 02:57:51.960553 1641653 node_ready.go:49] node "old-k8s-version-525469" is "Ready"
	I1119 02:57:51.960585 1641653 node_ready.go:38] duration metric: took 13.004034626s for node "old-k8s-version-525469" to be "Ready" ...
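The 13s wait above is minikube polling the Node object for the Ready condition; an equivalent manual check (a sketch only; it assumes the kubeconfig context is named after the profile, as minikube sets it) would be:

	kubectl --context old-k8s-version-525469 wait --for=condition=Ready \
	  node/old-k8s-version-525469 --timeout=6m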
	I1119 02:57:51.960600 1641653 api_server.go:52] waiting for apiserver process to appear ...
	I1119 02:57:51.960666 1641653 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 02:57:51.972519 1641653 api_server.go:72] duration metric: took 14.269214191s to wait for apiserver process to appear ...
	I1119 02:57:51.972544 1641653 api_server.go:88] waiting for apiserver healthz status ...
	I1119 02:57:51.972562 1641653 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1119 02:57:51.983122 1641653 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1119 02:57:51.984480 1641653 api_server.go:141] control plane version: v1.28.0
	I1119 02:57:51.984504 1641653 api_server.go:131] duration metric: took 11.953358ms to wait for apiserver health ...
	I1119 02:57:51.984524 1641653 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 02:57:51.988142 1641653 system_pods.go:59] 8 kube-system pods found
	I1119 02:57:51.988179 1641653 system_pods.go:61] "coredns-5dd5756b68-w8wb6" [214ec4de-7b8c-4e92-b8b7-b548713648ba] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:57:51.988187 1641653 system_pods.go:61] "etcd-old-k8s-version-525469" [d3195abd-84f2-4388-9bdb-0ad65ac5a3f1] Running
	I1119 02:57:51.988205 1641653 system_pods.go:61] "kindnet-rj2cj" [70f75322-19b2-4cda-8cc2-d016b36e3e78] Running
	I1119 02:57:51.988213 1641653 system_pods.go:61] "kube-apiserver-old-k8s-version-525469" [974d1163-bb7f-4fac-aaf5-1ea8365521db] Running
	I1119 02:57:51.988220 1641653 system_pods.go:61] "kube-controller-manager-old-k8s-version-525469" [1fbb4360-dfb0-4e66-b037-773e4e38cee0] Running
	I1119 02:57:51.988225 1641653 system_pods.go:61] "kube-proxy-jf89k" [1c8a750f-062c-4851-b070-6733aa4086f8] Running
	I1119 02:57:51.988229 1641653 system_pods.go:61] "kube-scheduler-old-k8s-version-525469" [3bd61184-c42a-49ff-a207-10c13f0994b9] Running
	I1119 02:57:51.988235 1641653 system_pods.go:61] "storage-provisioner" [bd46c0c0-a77e-4f54-ad2c-b9333afd81c6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 02:57:51.988243 1641653 system_pods.go:74] duration metric: took 3.709137ms to wait for pod list to return data ...
	I1119 02:57:51.988251 1641653 default_sa.go:34] waiting for default service account to be created ...
	I1119 02:57:51.990538 1641653 default_sa.go:45] found service account: "default"
	I1119 02:57:51.990559 1641653 default_sa.go:55] duration metric: took 2.296016ms for default service account to be created ...
	I1119 02:57:51.990568 1641653 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 02:57:51.993915 1641653 system_pods.go:86] 8 kube-system pods found
	I1119 02:57:51.993946 1641653 system_pods.go:89] "coredns-5dd5756b68-w8wb6" [214ec4de-7b8c-4e92-b8b7-b548713648ba] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:57:51.993952 1641653 system_pods.go:89] "etcd-old-k8s-version-525469" [d3195abd-84f2-4388-9bdb-0ad65ac5a3f1] Running
	I1119 02:57:51.993959 1641653 system_pods.go:89] "kindnet-rj2cj" [70f75322-19b2-4cda-8cc2-d016b36e3e78] Running
	I1119 02:57:51.993963 1641653 system_pods.go:89] "kube-apiserver-old-k8s-version-525469" [974d1163-bb7f-4fac-aaf5-1ea8365521db] Running
	I1119 02:57:51.993968 1641653 system_pods.go:89] "kube-controller-manager-old-k8s-version-525469" [1fbb4360-dfb0-4e66-b037-773e4e38cee0] Running
	I1119 02:57:51.993973 1641653 system_pods.go:89] "kube-proxy-jf89k" [1c8a750f-062c-4851-b070-6733aa4086f8] Running
	I1119 02:57:51.993978 1641653 system_pods.go:89] "kube-scheduler-old-k8s-version-525469" [3bd61184-c42a-49ff-a207-10c13f0994b9] Running
	I1119 02:57:51.993983 1641653 system_pods.go:89] "storage-provisioner" [bd46c0c0-a77e-4f54-ad2c-b9333afd81c6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 02:57:51.994013 1641653 retry.go:31] will retry after 274.031629ms: missing components: kube-dns
	I1119 02:57:52.272751 1641653 system_pods.go:86] 8 kube-system pods found
	I1119 02:57:52.272784 1641653 system_pods.go:89] "coredns-5dd5756b68-w8wb6" [214ec4de-7b8c-4e92-b8b7-b548713648ba] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:57:52.272791 1641653 system_pods.go:89] "etcd-old-k8s-version-525469" [d3195abd-84f2-4388-9bdb-0ad65ac5a3f1] Running
	I1119 02:57:52.272797 1641653 system_pods.go:89] "kindnet-rj2cj" [70f75322-19b2-4cda-8cc2-d016b36e3e78] Running
	I1119 02:57:52.272803 1641653 system_pods.go:89] "kube-apiserver-old-k8s-version-525469" [974d1163-bb7f-4fac-aaf5-1ea8365521db] Running
	I1119 02:57:52.272809 1641653 system_pods.go:89] "kube-controller-manager-old-k8s-version-525469" [1fbb4360-dfb0-4e66-b037-773e4e38cee0] Running
	I1119 02:57:52.272813 1641653 system_pods.go:89] "kube-proxy-jf89k" [1c8a750f-062c-4851-b070-6733aa4086f8] Running
	I1119 02:57:52.272818 1641653 system_pods.go:89] "kube-scheduler-old-k8s-version-525469" [3bd61184-c42a-49ff-a207-10c13f0994b9] Running
	I1119 02:57:52.272825 1641653 system_pods.go:89] "storage-provisioner" [bd46c0c0-a77e-4f54-ad2c-b9333afd81c6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 02:57:52.272843 1641653 retry.go:31] will retry after 244.315849ms: missing components: kube-dns
	I1119 02:57:52.521782 1641653 system_pods.go:86] 8 kube-system pods found
	I1119 02:57:52.521828 1641653 system_pods.go:89] "coredns-5dd5756b68-w8wb6" [214ec4de-7b8c-4e92-b8b7-b548713648ba] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:57:52.521836 1641653 system_pods.go:89] "etcd-old-k8s-version-525469" [d3195abd-84f2-4388-9bdb-0ad65ac5a3f1] Running
	I1119 02:57:52.521845 1641653 system_pods.go:89] "kindnet-rj2cj" [70f75322-19b2-4cda-8cc2-d016b36e3e78] Running
	I1119 02:57:52.521849 1641653 system_pods.go:89] "kube-apiserver-old-k8s-version-525469" [974d1163-bb7f-4fac-aaf5-1ea8365521db] Running
	I1119 02:57:52.521854 1641653 system_pods.go:89] "kube-controller-manager-old-k8s-version-525469" [1fbb4360-dfb0-4e66-b037-773e4e38cee0] Running
	I1119 02:57:52.521858 1641653 system_pods.go:89] "kube-proxy-jf89k" [1c8a750f-062c-4851-b070-6733aa4086f8] Running
	I1119 02:57:52.521863 1641653 system_pods.go:89] "kube-scheduler-old-k8s-version-525469" [3bd61184-c42a-49ff-a207-10c13f0994b9] Running
	I1119 02:57:52.521869 1641653 system_pods.go:89] "storage-provisioner" [bd46c0c0-a77e-4f54-ad2c-b9333afd81c6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 02:57:52.521887 1641653 retry.go:31] will retry after 464.753ms: missing components: kube-dns
	I1119 02:57:52.991512 1641653 system_pods.go:86] 8 kube-system pods found
	I1119 02:57:52.991599 1641653 system_pods.go:89] "coredns-5dd5756b68-w8wb6" [214ec4de-7b8c-4e92-b8b7-b548713648ba] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 02:57:52.991630 1641653 system_pods.go:89] "etcd-old-k8s-version-525469" [d3195abd-84f2-4388-9bdb-0ad65ac5a3f1] Running
	I1119 02:57:52.991645 1641653 system_pods.go:89] "kindnet-rj2cj" [70f75322-19b2-4cda-8cc2-d016b36e3e78] Running
	I1119 02:57:52.991651 1641653 system_pods.go:89] "kube-apiserver-old-k8s-version-525469" [974d1163-bb7f-4fac-aaf5-1ea8365521db] Running
	I1119 02:57:52.991656 1641653 system_pods.go:89] "kube-controller-manager-old-k8s-version-525469" [1fbb4360-dfb0-4e66-b037-773e4e38cee0] Running
	I1119 02:57:52.991660 1641653 system_pods.go:89] "kube-proxy-jf89k" [1c8a750f-062c-4851-b070-6733aa4086f8] Running
	I1119 02:57:52.991664 1641653 system_pods.go:89] "kube-scheduler-old-k8s-version-525469" [3bd61184-c42a-49ff-a207-10c13f0994b9] Running
	I1119 02:57:52.991670 1641653 system_pods.go:89] "storage-provisioner" [bd46c0c0-a77e-4f54-ad2c-b9333afd81c6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 02:57:52.991689 1641653 retry.go:31] will retry after 439.289876ms: missing components: kube-dns
	I1119 02:57:53.434983 1641653 system_pods.go:86] 8 kube-system pods found
	I1119 02:57:53.435017 1641653 system_pods.go:89] "coredns-5dd5756b68-w8wb6" [214ec4de-7b8c-4e92-b8b7-b548713648ba] Running
	I1119 02:57:53.435024 1641653 system_pods.go:89] "etcd-old-k8s-version-525469" [d3195abd-84f2-4388-9bdb-0ad65ac5a3f1] Running
	I1119 02:57:53.435030 1641653 system_pods.go:89] "kindnet-rj2cj" [70f75322-19b2-4cda-8cc2-d016b36e3e78] Running
	I1119 02:57:53.435035 1641653 system_pods.go:89] "kube-apiserver-old-k8s-version-525469" [974d1163-bb7f-4fac-aaf5-1ea8365521db] Running
	I1119 02:57:53.435041 1641653 system_pods.go:89] "kube-controller-manager-old-k8s-version-525469" [1fbb4360-dfb0-4e66-b037-773e4e38cee0] Running
	I1119 02:57:53.435045 1641653 system_pods.go:89] "kube-proxy-jf89k" [1c8a750f-062c-4851-b070-6733aa4086f8] Running
	I1119 02:57:53.435049 1641653 system_pods.go:89] "kube-scheduler-old-k8s-version-525469" [3bd61184-c42a-49ff-a207-10c13f0994b9] Running
	I1119 02:57:53.435054 1641653 system_pods.go:89] "storage-provisioner" [bd46c0c0-a77e-4f54-ad2c-b9333afd81c6] Running
	I1119 02:57:53.435063 1641653 system_pods.go:126] duration metric: took 1.44448911s to wait for k8s-apps to be running ...
	I1119 02:57:53.435071 1641653 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 02:57:53.435131 1641653 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 02:57:53.448436 1641653 system_svc.go:56] duration metric: took 13.354786ms WaitForService to wait for kubelet
	I1119 02:57:53.448464 1641653 kubeadm.go:587] duration metric: took 15.74516487s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 02:57:53.448493 1641653 node_conditions.go:102] verifying NodePressure condition ...
	I1119 02:57:53.451513 1641653 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1119 02:57:53.451545 1641653 node_conditions.go:123] node cpu capacity is 2
	I1119 02:57:53.451558 1641653 node_conditions.go:105] duration metric: took 3.059163ms to run NodePressure ...
	I1119 02:57:53.451568 1641653 start.go:242] waiting for startup goroutines ...
	I1119 02:57:53.451576 1641653 start.go:247] waiting for cluster config update ...
	I1119 02:57:53.451588 1641653 start.go:256] writing updated cluster config ...
	I1119 02:57:53.451870 1641653 ssh_runner.go:195] Run: rm -f paused
	I1119 02:57:53.455330 1641653 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 02:57:53.459464 1641653 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-w8wb6" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:57:53.464649 1641653 pod_ready.go:94] pod "coredns-5dd5756b68-w8wb6" is "Ready"
	I1119 02:57:53.464675 1641653 pod_ready.go:86] duration metric: took 5.183409ms for pod "coredns-5dd5756b68-w8wb6" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:57:53.467725 1641653 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-525469" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:57:53.472408 1641653 pod_ready.go:94] pod "etcd-old-k8s-version-525469" is "Ready"
	I1119 02:57:53.472432 1641653 pod_ready.go:86] duration metric: took 4.677684ms for pod "etcd-old-k8s-version-525469" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:57:53.475363 1641653 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-525469" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:57:53.480059 1641653 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-525469" is "Ready"
	I1119 02:57:53.480086 1641653 pod_ready.go:86] duration metric: took 4.699558ms for pod "kube-apiserver-old-k8s-version-525469" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:57:53.484013 1641653 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-525469" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:57:53.859291 1641653 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-525469" is "Ready"
	I1119 02:57:53.859319 1641653 pod_ready.go:86] duration metric: took 375.239928ms for pod "kube-controller-manager-old-k8s-version-525469" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:57:54.060033 1641653 pod_ready.go:83] waiting for pod "kube-proxy-jf89k" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:57:54.458942 1641653 pod_ready.go:94] pod "kube-proxy-jf89k" is "Ready"
	I1119 02:57:54.459008 1641653 pod_ready.go:86] duration metric: took 398.949086ms for pod "kube-proxy-jf89k" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:57:54.659620 1641653 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-525469" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:57:55.060098 1641653 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-525469" is "Ready"
	I1119 02:57:55.060189 1641653 pod_ready.go:86] duration metric: took 400.539966ms for pod "kube-scheduler-old-k8s-version-525469" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 02:57:55.060218 1641653 pod_ready.go:40] duration metric: took 1.604858391s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 02:57:55.117820 1641653 start.go:628] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1119 02:57:55.121615 1641653 out.go:203] 
	W1119 02:57:55.124531 1641653 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1119 02:57:55.127544 1641653 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1119 02:57:55.130512 1641653 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-525469" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 19 02:57:52 old-k8s-version-525469 crio[837]: time="2025-11-19T02:57:52.190663886Z" level=info msg="Created container 950ff0745dd99ee29d1311bd1e32ddec183f822713bef8c73ebba8a7ba69fa2e: kube-system/coredns-5dd5756b68-w8wb6/coredns" id=bfd1273b-ebb1-4bbb-852a-4835fdd9d4a1 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 02:57:52 old-k8s-version-525469 crio[837]: time="2025-11-19T02:57:52.193958562Z" level=info msg="Starting container: 950ff0745dd99ee29d1311bd1e32ddec183f822713bef8c73ebba8a7ba69fa2e" id=12d927bf-6693-448f-bb9d-46cf0809c5f6 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 02:57:52 old-k8s-version-525469 crio[837]: time="2025-11-19T02:57:52.196496877Z" level=info msg="Started container" PID=1947 containerID=950ff0745dd99ee29d1311bd1e32ddec183f822713bef8c73ebba8a7ba69fa2e description=kube-system/coredns-5dd5756b68-w8wb6/coredns id=12d927bf-6693-448f-bb9d-46cf0809c5f6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=612bf199492806ec9ea781972b512066495dad00096d8317a02465f8b58ba25a
	Nov 19 02:57:55 old-k8s-version-525469 crio[837]: time="2025-11-19T02:57:55.632362885Z" level=info msg="Running pod sandbox: default/busybox/POD" id=cf11ccf3-c15a-42b4-8895-ea1f226294a5 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 02:57:55 old-k8s-version-525469 crio[837]: time="2025-11-19T02:57:55.632461795Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:57:55 old-k8s-version-525469 crio[837]: time="2025-11-19T02:57:55.63845725Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:c624214a4925e63aac268f421c7d82dcbc8a2b6594dc488b3c7f771f5d3de35b UID:f6d2d599-e7e9-4681-a2aa-6c721027af44 NetNS:/var/run/netns/65906ddd-0cc4-498b-829f-408adf3f9a10 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001498098}] Aliases:map[]}"
	Nov 19 02:57:55 old-k8s-version-525469 crio[837]: time="2025-11-19T02:57:55.638501344Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 19 02:57:55 old-k8s-version-525469 crio[837]: time="2025-11-19T02:57:55.651590267Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:c624214a4925e63aac268f421c7d82dcbc8a2b6594dc488b3c7f771f5d3de35b UID:f6d2d599-e7e9-4681-a2aa-6c721027af44 NetNS:/var/run/netns/65906ddd-0cc4-498b-829f-408adf3f9a10 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001498098}] Aliases:map[]}"
	Nov 19 02:57:55 old-k8s-version-525469 crio[837]: time="2025-11-19T02:57:55.651768683Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 19 02:57:55 old-k8s-version-525469 crio[837]: time="2025-11-19T02:57:55.654563581Z" level=info msg="Ran pod sandbox c624214a4925e63aac268f421c7d82dcbc8a2b6594dc488b3c7f771f5d3de35b with infra container: default/busybox/POD" id=cf11ccf3-c15a-42b4-8895-ea1f226294a5 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 02:57:55 old-k8s-version-525469 crio[837]: time="2025-11-19T02:57:55.658338013Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=60b347bb-73d0-4e1a-a678-55ec6c7ecb52 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 02:57:55 old-k8s-version-525469 crio[837]: time="2025-11-19T02:57:55.658669138Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=60b347bb-73d0-4e1a-a678-55ec6c7ecb52 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 02:57:55 old-k8s-version-525469 crio[837]: time="2025-11-19T02:57:55.658725883Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=60b347bb-73d0-4e1a-a678-55ec6c7ecb52 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 02:57:55 old-k8s-version-525469 crio[837]: time="2025-11-19T02:57:55.659722639Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e98c90dc-3a98-4295-9cde-0a1b8d6fc956 name=/runtime.v1.ImageService/PullImage
	Nov 19 02:57:55 old-k8s-version-525469 crio[837]: time="2025-11-19T02:57:55.661983128Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 19 02:57:57 old-k8s-version-525469 crio[837]: time="2025-11-19T02:57:57.672502559Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=e98c90dc-3a98-4295-9cde-0a1b8d6fc956 name=/runtime.v1.ImageService/PullImage
	Nov 19 02:57:57 old-k8s-version-525469 crio[837]: time="2025-11-19T02:57:57.673363597Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=84d20b61-cdad-4374-92b0-1763e8431a84 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 02:57:57 old-k8s-version-525469 crio[837]: time="2025-11-19T02:57:57.676078752Z" level=info msg="Creating container: default/busybox/busybox" id=d525123d-ef72-48f1-ad4e-993b350c6131 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 02:57:57 old-k8s-version-525469 crio[837]: time="2025-11-19T02:57:57.676204484Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:57:57 old-k8s-version-525469 crio[837]: time="2025-11-19T02:57:57.693308259Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:57:57 old-k8s-version-525469 crio[837]: time="2025-11-19T02:57:57.693965775Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:57:57 old-k8s-version-525469 crio[837]: time="2025-11-19T02:57:57.710039787Z" level=info msg="Created container de833d90a780571c9537afffed55fd6cf58bc60f53578a662b418326e42b1074: default/busybox/busybox" id=d525123d-ef72-48f1-ad4e-993b350c6131 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 02:57:57 old-k8s-version-525469 crio[837]: time="2025-11-19T02:57:57.710633624Z" level=info msg="Starting container: de833d90a780571c9537afffed55fd6cf58bc60f53578a662b418326e42b1074" id=23afc16f-1749-4dda-bcff-11f2b852617e name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 02:57:57 old-k8s-version-525469 crio[837]: time="2025-11-19T02:57:57.712978673Z" level=info msg="Started container" PID=2003 containerID=de833d90a780571c9537afffed55fd6cf58bc60f53578a662b418326e42b1074 description=default/busybox/busybox id=23afc16f-1749-4dda-bcff-11f2b852617e name=/runtime.v1.RuntimeService/StartContainer sandboxID=c624214a4925e63aac268f421c7d82dcbc8a2b6594dc488b3c7f771f5d3de35b
	Nov 19 02:58:04 old-k8s-version-525469 crio[837]: time="2025-11-19T02:58:04.548566898Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	de833d90a7805       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago       Running             busybox                   0                   c624214a4925e       busybox                                          default
	950ff0745dd99       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      13 seconds ago      Running             coredns                   0                   612bf19949280       coredns-5dd5756b68-w8wb6                         kube-system
	a2e1eaec2e37c       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      13 seconds ago      Running             storage-provisioner       0                   5469941c9661c       storage-provisioner                              kube-system
	d731e98b04b03       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    24 seconds ago      Running             kindnet-cni               0                   f045bdb67c9d6       kindnet-rj2cj                                    kube-system
	0a90641c9e867       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                      27 seconds ago      Running             kube-proxy                0                   1d041597c11ea       kube-proxy-jf89k                                 kube-system
	4f7d69da5eb95       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                      48 seconds ago      Running             kube-scheduler            0                   6add215c9c24c       kube-scheduler-old-k8s-version-525469            kube-system
	81bfa1164d48e       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                      48 seconds ago      Running             kube-apiserver            0                   fbf6544e1eca5       kube-apiserver-old-k8s-version-525469            kube-system
	997b830dfea8a       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                      48 seconds ago      Running             etcd                      0                   5531341188f58       etcd-old-k8s-version-525469                      kube-system
	d1935079a3a16       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                      48 seconds ago      Running             kube-controller-manager   0                   13084c8bf827f       kube-controller-manager-old-k8s-version-525469   kube-system
	
	
	==> coredns [950ff0745dd99ee29d1311bd1e32ddec183f822713bef8c73ebba8a7ba69fa2e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:54137 - 42334 "HINFO IN 1677107828458217343.658512233425499680. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.003951247s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-525469
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-525469
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ebd49efe8b7d44f89fc96e21beffcd64dfee8277
	                    minikube.k8s.io/name=old-k8s-version-525469
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T02_57_25_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 02:57:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-525469
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 02:58:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 02:57:55 +0000   Wed, 19 Nov 2025 02:57:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 02:57:55 +0000   Wed, 19 Nov 2025 02:57:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 02:57:55 +0000   Wed, 19 Nov 2025 02:57:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 02:57:55 +0000   Wed, 19 Nov 2025 02:57:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-525469
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                3d1f45c6-9f00-4378-a685-d971289e6f86
	  Boot ID:                    b92b1939-fcd0-45dc-ac89-2d161566a71c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-5dd5756b68-w8wb6                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     28s
	  kube-system                 etcd-old-k8s-version-525469                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         41s
	  kube-system                 kindnet-rj2cj                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-old-k8s-version-525469             250m (12%)    0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-controller-manager-old-k8s-version-525469    200m (10%)    0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-proxy-jf89k                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-old-k8s-version-525469             100m (5%)     0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 26s                kube-proxy       
	  Normal  NodeHasSufficientMemory  50s (x8 over 50s)  kubelet          Node old-k8s-version-525469 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    50s (x8 over 50s)  kubelet          Node old-k8s-version-525469 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     50s (x8 over 50s)  kubelet          Node old-k8s-version-525469 status is now: NodeHasSufficientPID
	  Normal  Starting                 42s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  42s                kubelet          Node old-k8s-version-525469 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    42s                kubelet          Node old-k8s-version-525469 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     42s                kubelet          Node old-k8s-version-525469 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           29s                node-controller  Node old-k8s-version-525469 event: Registered Node old-k8s-version-525469 in Controller
	  Normal  NodeReady                15s                kubelet          Node old-k8s-version-525469 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov19 02:28] overlayfs: idmapped layers are currently not supported
	[Nov19 02:30] overlayfs: idmapped layers are currently not supported
	[Nov19 02:35] overlayfs: idmapped layers are currently not supported
	[ +37.747558] overlayfs: idmapped layers are currently not supported
	[Nov19 02:37] overlayfs: idmapped layers are currently not supported
	[Nov19 02:38] overlayfs: idmapped layers are currently not supported
	[Nov19 02:39] overlayfs: idmapped layers are currently not supported
	[Nov19 02:41] overlayfs: idmapped layers are currently not supported
	[ +25.528121] overlayfs: idmapped layers are currently not supported
	[ +11.329962] overlayfs: idmapped layers are currently not supported
	[Nov19 02:42] overlayfs: idmapped layers are currently not supported
	[ +16.386117] overlayfs: idmapped layers are currently not supported
	[Nov19 02:43] overlayfs: idmapped layers are currently not supported
	[ +23.762081] overlayfs: idmapped layers are currently not supported
	[Nov19 02:45] overlayfs: idmapped layers are currently not supported
	[Nov19 02:46] overlayfs: idmapped layers are currently not supported
	[Nov19 02:48] overlayfs: idmapped layers are currently not supported
	[Nov19 02:50] overlayfs: idmapped layers are currently not supported
	[ +30.622614] overlayfs: idmapped layers are currently not supported
	[Nov19 02:53] overlayfs: idmapped layers are currently not supported
	[Nov19 02:55] overlayfs: idmapped layers are currently not supported
	[ +48.629499] overlayfs: idmapped layers are currently not supported
	[Nov19 02:56] overlayfs: idmapped layers are currently not supported
	[ +31.470515] overlayfs: idmapped layers are currently not supported
	[Nov19 02:57] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [997b830dfea8a5ab04aca25e4db600e75f4dd008afea68397ab2badbea16a8f5] <==
	{"level":"info","ts":"2025-11-19T02:57:17.497213Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-11-19T02:57:17.497343Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-11-19T02:57:17.500081Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-19T02:57:17.500467Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-19T02:57:17.50075Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-19T02:57:17.502343Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-19T02:57:17.502649Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-19T02:57:17.669577Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2025-11-19T02:57:17.669922Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2025-11-19T02:57:17.669966Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2025-11-19T02:57:17.670003Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2025-11-19T02:57:17.670037Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-19T02:57:17.67008Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2025-11-19T02:57:17.670109Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-19T02:57:17.672263Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-525469 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-19T02:57:17.672339Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-19T02:57:17.677772Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-19T02:57:17.677926Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-19T02:57:17.679603Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-19T02:57:17.685875Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-11-19T02:57:17.686043Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-19T02:57:17.686078Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-19T02:57:17.686374Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-19T02:57:17.68648Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-19T02:57:17.705568Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 02:58:06 up 10:40,  0 user,  load average: 2.16, 3.07, 2.48
	Linux old-k8s-version-525469 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d731e98b04b03d1eeff633383694b4338d8b075fce0e9cd0e830ac4c529f784e] <==
	I1119 02:57:41.233383       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 02:57:41.233966       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1119 02:57:41.234107       1 main.go:148] setting mtu 1500 for CNI 
	I1119 02:57:41.234124       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 02:57:41.234137       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T02:57:41Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 02:57:41.526117       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 02:57:41.526157       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 02:57:41.526167       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 02:57:41.526318       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1119 02:57:41.726668       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 02:57:41.726825       1 metrics.go:72] Registering metrics
	I1119 02:57:41.726932       1 controller.go:711] "Syncing nftables rules"
	I1119 02:57:51.439308       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1119 02:57:51.439366       1 main.go:301] handling current node
	I1119 02:58:01.435333       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1119 02:58:01.435400       1 main.go:301] handling current node
	
	
	==> kube-apiserver [81bfa1164d48ef2cee992c40229578f45562bb8168203a427635969f38b18099] <==
	I1119 02:57:22.009118       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1119 02:57:22.010552       1 aggregator.go:166] initial CRD sync complete...
	I1119 02:57:22.011513       1 autoregister_controller.go:141] Starting autoregister controller
	I1119 02:57:22.015825       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1119 02:57:22.015917       1 cache.go:39] Caches are synced for autoregister controller
	I1119 02:57:22.015368       1 controller.go:624] quota admission added evaluator for: namespaces
	I1119 02:57:22.067024       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1119 02:57:22.107418       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1119 02:57:22.108768       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1119 02:57:22.109931       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 02:57:22.613348       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1119 02:57:22.645003       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1119 02:57:22.645086       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 02:57:23.212719       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 02:57:23.268441       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 02:57:23.372245       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1119 02:57:23.386182       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1119 02:57:23.387707       1 controller.go:624] quota admission added evaluator for: endpoints
	I1119 02:57:23.393780       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1119 02:57:24.217197       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1119 02:57:24.715619       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1119 02:57:24.731939       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1119 02:57:24.752886       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1119 02:57:37.525724       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1119 02:57:37.680064       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [d1935079a3a16f35442b2fd0be1a5019ef72030a46850dd30bec7ffefd503df0] <==
	I1119 02:57:37.280650       1 shared_informer.go:318] Caches are synced for resource quota
	I1119 02:57:37.541176       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-rj2cj"
	I1119 02:57:37.549581       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-jf89k"
	I1119 02:57:37.639260       1 shared_informer.go:318] Caches are synced for garbage collector
	I1119 02:57:37.675142       1 shared_informer.go:318] Caches are synced for garbage collector
	I1119 02:57:37.675177       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1119 02:57:37.687406       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1119 02:57:38.096530       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-w8wb6"
	I1119 02:57:38.120187       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-8r8dc"
	I1119 02:57:38.153121       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="465.442508ms"
	I1119 02:57:38.187020       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="33.850425ms"
	I1119 02:57:38.226309       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="38.853589ms"
	I1119 02:57:38.226435       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="80.457µs"
	I1119 02:57:39.092806       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1119 02:57:39.201786       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-8r8dc"
	I1119 02:57:39.220788       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="131.152762ms"
	I1119 02:57:39.241775       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="20.930974ms"
	I1119 02:57:39.281021       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="39.196956ms"
	I1119 02:57:39.282209       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="94.701µs"
	I1119 02:57:51.769627       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="140.074µs"
	I1119 02:57:51.793026       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="64.244µs"
	I1119 02:57:52.075557       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1119 02:57:53.089346       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="76.453µs"
	I1119 02:57:53.137088       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="11.704233ms"
	I1119 02:57:53.137488       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="74.312µs"
	
	
	==> kube-proxy [0a90641c9e867fac7536f9788d3df0ae9aa1c2196460c065028724704664f361] <==
	I1119 02:57:38.926931       1 server_others.go:69] "Using iptables proxy"
	I1119 02:57:39.177945       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1119 02:57:39.314769       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 02:57:39.321099       1 server_others.go:152] "Using iptables Proxier"
	I1119 02:57:39.321134       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1119 02:57:39.321143       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1119 02:57:39.321165       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1119 02:57:39.321349       1 server.go:846] "Version info" version="v1.28.0"
	I1119 02:57:39.321358       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 02:57:39.324436       1 config.go:188] "Starting service config controller"
	I1119 02:57:39.324456       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1119 02:57:39.325263       1 config.go:97] "Starting endpoint slice config controller"
	I1119 02:57:39.325279       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1119 02:57:39.330193       1 config.go:315] "Starting node config controller"
	I1119 02:57:39.330214       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1119 02:57:39.425091       1 shared_informer.go:318] Caches are synced for service config
	I1119 02:57:39.426253       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1119 02:57:39.430270       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [4f7d69da5eb951d1dc658de606c71530586f395d3f1b3b4c072cf5a14ee0b92a] <==
	W1119 02:57:22.107974       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1119 02:57:22.108052       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1119 02:57:22.108129       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1119 02:57:22.108196       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1119 02:57:22.108243       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1119 02:57:22.108396       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1119 02:57:22.109383       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1119 02:57:22.109411       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1119 02:57:22.108456       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1119 02:57:22.108493       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1119 02:57:22.108540       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1119 02:57:22.107849       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1119 02:57:22.109210       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1119 02:57:22.109219       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1119 02:57:22.109305       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1119 02:57:22.109314       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1119 02:57:22.109592       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1119 02:57:22.109667       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1119 02:57:22.111763       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1119 02:57:22.111837       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1119 02:57:22.958292       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1119 02:57:22.958337       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1119 02:57:22.992675       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1119 02:57:22.992782       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I1119 02:57:23.574485       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 19 02:57:37 old-k8s-version-525469 kubelet[1380]: I1119 02:57:37.710363    1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcgb5\" (UniqueName: \"kubernetes.io/projected/1c8a750f-062c-4851-b070-6733aa4086f8-kube-api-access-wcgb5\") pod \"kube-proxy-jf89k\" (UID: \"1c8a750f-062c-4851-b070-6733aa4086f8\") " pod="kube-system/kube-proxy-jf89k"
	Nov 19 02:57:37 old-k8s-version-525469 kubelet[1380]: I1119 02:57:37.710404    1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/70f75322-19b2-4cda-8cc2-d016b36e3e78-xtables-lock\") pod \"kindnet-rj2cj\" (UID: \"70f75322-19b2-4cda-8cc2-d016b36e3e78\") " pod="kube-system/kindnet-rj2cj"
	Nov 19 02:57:37 old-k8s-version-525469 kubelet[1380]: I1119 02:57:37.710428    1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8pwh\" (UniqueName: \"kubernetes.io/projected/70f75322-19b2-4cda-8cc2-d016b36e3e78-kube-api-access-p8pwh\") pod \"kindnet-rj2cj\" (UID: \"70f75322-19b2-4cda-8cc2-d016b36e3e78\") " pod="kube-system/kindnet-rj2cj"
	Nov 19 02:57:37 old-k8s-version-525469 kubelet[1380]: I1119 02:57:37.710460    1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1c8a750f-062c-4851-b070-6733aa4086f8-xtables-lock\") pod \"kube-proxy-jf89k\" (UID: \"1c8a750f-062c-4851-b070-6733aa4086f8\") " pod="kube-system/kube-proxy-jf89k"
	Nov 19 02:57:37 old-k8s-version-525469 kubelet[1380]: I1119 02:57:37.710484    1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1c8a750f-062c-4851-b070-6733aa4086f8-lib-modules\") pod \"kube-proxy-jf89k\" (UID: \"1c8a750f-062c-4851-b070-6733aa4086f8\") " pod="kube-system/kube-proxy-jf89k"
	Nov 19 02:57:37 old-k8s-version-525469 kubelet[1380]: E1119 02:57:37.878694    1380 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 19 02:57:37 old-k8s-version-525469 kubelet[1380]: E1119 02:57:37.878746    1380 projected.go:198] Error preparing data for projected volume kube-api-access-p8pwh for pod kube-system/kindnet-rj2cj: configmap "kube-root-ca.crt" not found
	Nov 19 02:57:37 old-k8s-version-525469 kubelet[1380]: E1119 02:57:37.878836    1380 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/70f75322-19b2-4cda-8cc2-d016b36e3e78-kube-api-access-p8pwh podName:70f75322-19b2-4cda-8cc2-d016b36e3e78 nodeName:}" failed. No retries permitted until 2025-11-19 02:57:38.378798958 +0000 UTC m=+13.696785053 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-p8pwh" (UniqueName: "kubernetes.io/projected/70f75322-19b2-4cda-8cc2-d016b36e3e78-kube-api-access-p8pwh") pod "kindnet-rj2cj" (UID: "70f75322-19b2-4cda-8cc2-d016b36e3e78") : configmap "kube-root-ca.crt" not found
	Nov 19 02:57:37 old-k8s-version-525469 kubelet[1380]: E1119 02:57:37.929927    1380 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 19 02:57:37 old-k8s-version-525469 kubelet[1380]: E1119 02:57:37.929959    1380 projected.go:198] Error preparing data for projected volume kube-api-access-wcgb5 for pod kube-system/kube-proxy-jf89k: configmap "kube-root-ca.crt" not found
	Nov 19 02:57:37 old-k8s-version-525469 kubelet[1380]: E1119 02:57:37.930021    1380 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1c8a750f-062c-4851-b070-6733aa4086f8-kube-api-access-wcgb5 podName:1c8a750f-062c-4851-b070-6733aa4086f8 nodeName:}" failed. No retries permitted until 2025-11-19 02:57:38.430001627 +0000 UTC m=+13.747987723 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-wcgb5" (UniqueName: "kubernetes.io/projected/1c8a750f-062c-4851-b070-6733aa4086f8-kube-api-access-wcgb5") pod "kube-proxy-jf89k" (UID: "1c8a750f-062c-4851-b070-6733aa4086f8") : configmap "kube-root-ca.crt" not found
	Nov 19 02:57:38 old-k8s-version-525469 kubelet[1380]: W1119 02:57:38.773244    1380 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/8d5d18297d31faedbee0dfbb322407f6bfb9f69bcf20408c0cd6dd14bd8ebca9/crio-1d041597c11eadd61c10282a3a3a94677e858b7c1d371c1c663b47f63c3f89bd WatchSource:0}: Error finding container 1d041597c11eadd61c10282a3a3a94677e858b7c1d371c1c663b47f63c3f89bd: Status 404 returned error can't find the container with id 1d041597c11eadd61c10282a3a3a94677e858b7c1d371c1c663b47f63c3f89bd
	Nov 19 02:57:42 old-k8s-version-525469 kubelet[1380]: I1119 02:57:42.061742    1380 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-rj2cj" podStartSLOduration=2.398896273 podCreationTimestamp="2025-11-19 02:57:37 +0000 UTC" firstStartedPulling="2025-11-19 02:57:38.478024569 +0000 UTC m=+13.796010665" lastFinishedPulling="2025-11-19 02:57:41.140822672 +0000 UTC m=+16.458808768" observedRunningTime="2025-11-19 02:57:42.061393495 +0000 UTC m=+17.379379591" watchObservedRunningTime="2025-11-19 02:57:42.061694376 +0000 UTC m=+17.379680480"
	Nov 19 02:57:42 old-k8s-version-525469 kubelet[1380]: I1119 02:57:42.061877    1380 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-jf89k" podStartSLOduration=5.061843599 podCreationTimestamp="2025-11-19 02:57:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:57:39.074095851 +0000 UTC m=+14.392081946" watchObservedRunningTime="2025-11-19 02:57:42.061843599 +0000 UTC m=+17.379829695"
	Nov 19 02:57:51 old-k8s-version-525469 kubelet[1380]: I1119 02:57:51.731613    1380 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 19 02:57:51 old-k8s-version-525469 kubelet[1380]: I1119 02:57:51.762828    1380 topology_manager.go:215] "Topology Admit Handler" podUID="bd46c0c0-a77e-4f54-ad2c-b9333afd81c6" podNamespace="kube-system" podName="storage-provisioner"
	Nov 19 02:57:51 old-k8s-version-525469 kubelet[1380]: I1119 02:57:51.772030    1380 topology_manager.go:215] "Topology Admit Handler" podUID="214ec4de-7b8c-4e92-b8b7-b548713648ba" podNamespace="kube-system" podName="coredns-5dd5756b68-w8wb6"
	Nov 19 02:57:51 old-k8s-version-525469 kubelet[1380]: I1119 02:57:51.926356    1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfrnr\" (UniqueName: \"kubernetes.io/projected/bd46c0c0-a77e-4f54-ad2c-b9333afd81c6-kube-api-access-nfrnr\") pod \"storage-provisioner\" (UID: \"bd46c0c0-a77e-4f54-ad2c-b9333afd81c6\") " pod="kube-system/storage-provisioner"
	Nov 19 02:57:51 old-k8s-version-525469 kubelet[1380]: I1119 02:57:51.926422    1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/214ec4de-7b8c-4e92-b8b7-b548713648ba-config-volume\") pod \"coredns-5dd5756b68-w8wb6\" (UID: \"214ec4de-7b8c-4e92-b8b7-b548713648ba\") " pod="kube-system/coredns-5dd5756b68-w8wb6"
	Nov 19 02:57:51 old-k8s-version-525469 kubelet[1380]: I1119 02:57:51.926453    1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2tlm\" (UniqueName: \"kubernetes.io/projected/214ec4de-7b8c-4e92-b8b7-b548713648ba-kube-api-access-d2tlm\") pod \"coredns-5dd5756b68-w8wb6\" (UID: \"214ec4de-7b8c-4e92-b8b7-b548713648ba\") " pod="kube-system/coredns-5dd5756b68-w8wb6"
	Nov 19 02:57:51 old-k8s-version-525469 kubelet[1380]: I1119 02:57:51.926483    1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/bd46c0c0-a77e-4f54-ad2c-b9333afd81c6-tmp\") pod \"storage-provisioner\" (UID: \"bd46c0c0-a77e-4f54-ad2c-b9333afd81c6\") " pod="kube-system/storage-provisioner"
	Nov 19 02:57:53 old-k8s-version-525469 kubelet[1380]: I1119 02:57:53.108531    1380 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-w8wb6" podStartSLOduration=15.10848004 podCreationTimestamp="2025-11-19 02:57:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:57:53.088415331 +0000 UTC m=+28.406401435" watchObservedRunningTime="2025-11-19 02:57:53.10848004 +0000 UTC m=+28.426466136"
	Nov 19 02:57:53 old-k8s-version-525469 kubelet[1380]: I1119 02:57:53.124499    1380 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=15.12444628 podCreationTimestamp="2025-11-19 02:57:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 02:57:53.109315799 +0000 UTC m=+28.427301894" watchObservedRunningTime="2025-11-19 02:57:53.12444628 +0000 UTC m=+28.442432384"
	Nov 19 02:57:55 old-k8s-version-525469 kubelet[1380]: I1119 02:57:55.330254    1380 topology_manager.go:215] "Topology Admit Handler" podUID="f6d2d599-e7e9-4681-a2aa-6c721027af44" podNamespace="default" podName="busybox"
	Nov 19 02:57:55 old-k8s-version-525469 kubelet[1380]: I1119 02:57:55.449836    1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxgql\" (UniqueName: \"kubernetes.io/projected/f6d2d599-e7e9-4681-a2aa-6c721027af44-kube-api-access-pxgql\") pod \"busybox\" (UID: \"f6d2d599-e7e9-4681-a2aa-6c721027af44\") " pod="default/busybox"
	
	
	==> storage-provisioner [a2e1eaec2e37ca09fd0d24f7fd3a743df617660c30e13e719574a490a423961d] <==
	I1119 02:57:52.138818       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1119 02:57:52.153168       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1119 02:57:52.153217       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1119 02:57:52.164741       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1119 02:57:52.164894       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-525469_67106712-de71-47b6-903e-b2aa385b9aaf!
	I1119 02:57:52.166829       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e9c87b51-c0db-4a20-998e-baae02e74881", APIVersion:"v1", ResourceVersion:"444", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-525469_67106712-de71-47b6-903e-b2aa385b9aaf became leader
	I1119 02:57:52.265553       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-525469_67106712-de71-47b6-903e-b2aa385b9aaf!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-525469 -n old-k8s-version-525469
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-525469 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.43s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (6.28s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-525469 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p old-k8s-version-525469 --alsologtostderr -v=1: exit status 80 (2.168632409s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-525469 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1119 02:59:19.034842 1647480 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:59:19.035027 1647480 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:59:19.035039 1647480 out.go:374] Setting ErrFile to fd 2...
	I1119 02:59:19.035079 1647480 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:59:19.035495 1647480 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-1463525/.minikube/bin
	I1119 02:59:19.035849 1647480 out.go:368] Setting JSON to false
	I1119 02:59:19.035900 1647480 mustload.go:66] Loading cluster: old-k8s-version-525469
	I1119 02:59:19.036376 1647480 config.go:182] Loaded profile config "old-k8s-version-525469": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1119 02:59:19.037032 1647480 cli_runner.go:164] Run: docker container inspect old-k8s-version-525469 --format={{.State.Status}}
	I1119 02:59:19.058318 1647480 host.go:66] Checking if "old-k8s-version-525469" exists ...
	I1119 02:59:19.058654 1647480 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:59:19.143475 1647480 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-19 02:59:19.133238078 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 02:59:19.144167 1647480 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-525469 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=
true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1119 02:59:19.147809 1647480 out.go:179] * Pausing node old-k8s-version-525469 ... 
	I1119 02:59:19.150025 1647480 host.go:66] Checking if "old-k8s-version-525469" exists ...
	I1119 02:59:19.150352 1647480 ssh_runner.go:195] Run: systemctl --version
	I1119 02:59:19.150512 1647480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-525469
	I1119 02:59:19.171104 1647480 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34900 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/old-k8s-version-525469/id_rsa Username:docker}
	I1119 02:59:19.288115 1647480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 02:59:19.303719 1647480 pause.go:52] kubelet running: true
	I1119 02:59:19.303799 1647480 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1119 02:59:19.628677 1647480 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1119 02:59:19.628777 1647480 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1119 02:59:19.721332 1647480 cri.go:89] found id: "1e39dc39380cacbe09c4d92d95094596956ef4bdfab3c911455508c2ea032684"
	I1119 02:59:19.721358 1647480 cri.go:89] found id: "53863eb3f6e2f364671dc355ca4cbd8a26a9924dd9beec0d9814d3d6ca0e74fa"
	I1119 02:59:19.721363 1647480 cri.go:89] found id: "60d2a0edd3ab8416b4bc4c9842ab46127629fde5aef3c4a1faea18d3bd15fde4"
	I1119 02:59:19.721367 1647480 cri.go:89] found id: "235acbaf06e664b5de8391a1ab5780f85ac9ba0416c65d86f8dccbfaa51068d1"
	I1119 02:59:19.721371 1647480 cri.go:89] found id: "142e4b24ecf679fdf5439063370dc0b248972ad5b7156c4b11d759ca3eb1a5fb"
	I1119 02:59:19.721374 1647480 cri.go:89] found id: "2f6292580615ce6fb56d365b1c0f4e962165fafffb0c461422541aa1841cfc86"
	I1119 02:59:19.721377 1647480 cri.go:89] found id: "8d62ae4ac8c3bce9aa36e4a78452b16e478ca3633adaf4a4fd9ade5e868e3c78"
	I1119 02:59:19.721381 1647480 cri.go:89] found id: "ff23878d27aac15b322cc8fb8b4fb5e92dfdaae8febb40b85cef9bd65331149b"
	I1119 02:59:19.721384 1647480 cri.go:89] found id: "567aa3846969404af990c3a71e9425667593df67d00ae761c20f8045d800b846"
	I1119 02:59:19.721391 1647480 cri.go:89] found id: "1966d902d6547084dde2f036edcea56b707e50a1b070c2ee7f35ca4118ef27be"
	I1119 02:59:19.721394 1647480 cri.go:89] found id: "5b0855eaad416574d49cfa5dbd17994e2931ec934fc6e5e17bcf06b94186dabd"
	I1119 02:59:19.721397 1647480 cri.go:89] found id: ""
	I1119 02:59:19.721445 1647480 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 02:59:19.735038 1647480 retry.go:31] will retry after 271.650531ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:59:19Z" level=error msg="open /run/runc: no such file or directory"
	I1119 02:59:20.011022 1647480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 02:59:20.039371 1647480 pause.go:52] kubelet running: false
	I1119 02:59:20.039439 1647480 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1119 02:59:20.286152 1647480 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1119 02:59:20.286234 1647480 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1119 02:59:20.378134 1647480 cri.go:89] found id: "1e39dc39380cacbe09c4d92d95094596956ef4bdfab3c911455508c2ea032684"
	I1119 02:59:20.378158 1647480 cri.go:89] found id: "53863eb3f6e2f364671dc355ca4cbd8a26a9924dd9beec0d9814d3d6ca0e74fa"
	I1119 02:59:20.378163 1647480 cri.go:89] found id: "60d2a0edd3ab8416b4bc4c9842ab46127629fde5aef3c4a1faea18d3bd15fde4"
	I1119 02:59:20.378167 1647480 cri.go:89] found id: "235acbaf06e664b5de8391a1ab5780f85ac9ba0416c65d86f8dccbfaa51068d1"
	I1119 02:59:20.378171 1647480 cri.go:89] found id: "142e4b24ecf679fdf5439063370dc0b248972ad5b7156c4b11d759ca3eb1a5fb"
	I1119 02:59:20.378174 1647480 cri.go:89] found id: "2f6292580615ce6fb56d365b1c0f4e962165fafffb0c461422541aa1841cfc86"
	I1119 02:59:20.378178 1647480 cri.go:89] found id: "8d62ae4ac8c3bce9aa36e4a78452b16e478ca3633adaf4a4fd9ade5e868e3c78"
	I1119 02:59:20.378181 1647480 cri.go:89] found id: "ff23878d27aac15b322cc8fb8b4fb5e92dfdaae8febb40b85cef9bd65331149b"
	I1119 02:59:20.378183 1647480 cri.go:89] found id: "567aa3846969404af990c3a71e9425667593df67d00ae761c20f8045d800b846"
	I1119 02:59:20.378191 1647480 cri.go:89] found id: "1966d902d6547084dde2f036edcea56b707e50a1b070c2ee7f35ca4118ef27be"
	I1119 02:59:20.378195 1647480 cri.go:89] found id: "5b0855eaad416574d49cfa5dbd17994e2931ec934fc6e5e17bcf06b94186dabd"
	I1119 02:59:20.378198 1647480 cri.go:89] found id: ""
	I1119 02:59:20.378252 1647480 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 02:59:20.392657 1647480 retry.go:31] will retry after 443.824892ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:59:20Z" level=error msg="open /run/runc: no such file or directory"
	I1119 02:59:20.837363 1647480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 02:59:20.850389 1647480 pause.go:52] kubelet running: false
	I1119 02:59:20.850452 1647480 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1119 02:59:21.029900 1647480 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1119 02:59:21.029981 1647480 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1119 02:59:21.098611 1647480 cri.go:89] found id: "1e39dc39380cacbe09c4d92d95094596956ef4bdfab3c911455508c2ea032684"
	I1119 02:59:21.098639 1647480 cri.go:89] found id: "53863eb3f6e2f364671dc355ca4cbd8a26a9924dd9beec0d9814d3d6ca0e74fa"
	I1119 02:59:21.098644 1647480 cri.go:89] found id: "60d2a0edd3ab8416b4bc4c9842ab46127629fde5aef3c4a1faea18d3bd15fde4"
	I1119 02:59:21.098649 1647480 cri.go:89] found id: "235acbaf06e664b5de8391a1ab5780f85ac9ba0416c65d86f8dccbfaa51068d1"
	I1119 02:59:21.098652 1647480 cri.go:89] found id: "142e4b24ecf679fdf5439063370dc0b248972ad5b7156c4b11d759ca3eb1a5fb"
	I1119 02:59:21.098656 1647480 cri.go:89] found id: "2f6292580615ce6fb56d365b1c0f4e962165fafffb0c461422541aa1841cfc86"
	I1119 02:59:21.098659 1647480 cri.go:89] found id: "8d62ae4ac8c3bce9aa36e4a78452b16e478ca3633adaf4a4fd9ade5e868e3c78"
	I1119 02:59:21.098662 1647480 cri.go:89] found id: "ff23878d27aac15b322cc8fb8b4fb5e92dfdaae8febb40b85cef9bd65331149b"
	I1119 02:59:21.098665 1647480 cri.go:89] found id: "567aa3846969404af990c3a71e9425667593df67d00ae761c20f8045d800b846"
	I1119 02:59:21.098672 1647480 cri.go:89] found id: "1966d902d6547084dde2f036edcea56b707e50a1b070c2ee7f35ca4118ef27be"
	I1119 02:59:21.098675 1647480 cri.go:89] found id: "5b0855eaad416574d49cfa5dbd17994e2931ec934fc6e5e17bcf06b94186dabd"
	I1119 02:59:21.098678 1647480 cri.go:89] found id: ""
	I1119 02:59:21.098726 1647480 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 02:59:21.112852 1647480 out.go:203] 
	W1119 02:59:21.116105 1647480 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:59:21Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:59:21Z" level=error msg="open /run/runc: no such file or directory"
	
	W1119 02:59:21.116126 1647480 out.go:285] * 
	* 
	W1119 02:59:21.130179 1647480 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 02:59:21.133383 1647480 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p old-k8s-version-525469 --alsologtostderr -v=1 failed: exit status 80
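The stderr above shows where the exit-80 (GUEST_PAUSE) comes from: pause disables the kubelet (systemctl disable --now kubelet), lists running containers in the kube-system, kubernetes-dashboard and istio-operator namespaces through crictl, and then runs `sudo runc list -f json`, which fails with "open /run/runc: no such file or directory"; the retries at retry.go:31 never recover. Below is a minimal sketch, not minikube's implementation, of reproducing that same check from inside the old-k8s-version-525469 node; the crictl and runc commands are the ones in the trace, while the Go wrapper, the file name pausecheck.go and the retry count are illustrative assumptions only.

	// pausecheck.go: illustrative reproduction of the container-listing step
	// that fails above. Run it on the node itself (for example via docker exec
	// into the old-k8s-version-525469 container), since it shells out to
	// crictl and runc there.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// runcList runs the same command the pause path runs; it fails when runc's
	// state directory (/run/runc in the error above) does not exist.
	func runcList() error {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err != nil {
			return fmt.Errorf("runc list: %v: %s", err, out)
		}
		fmt.Printf("runc list ok:\n%s\n", out)
		return nil
	}

	func main() {
		// Mirror the crictl query from cri.go:54 in the trace: running
		// containers in the namespaces minikube pauses.
		for _, ns := range []string{"kube-system", "kubernetes-dashboard", "istio-operator"} {
			out, _ := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
				"--label", "io.kubernetes.pod.namespace="+ns).CombinedOutput()
			fmt.Printf("namespace %s containers:\n%s", ns, out)
		}
		// Retry runc list a few times with a short backoff, as retry.go does,
		// to show whether /run/runc ever appears.
		for attempt := 1; attempt <= 3; attempt++ {
			err := runcList()
			if err == nil {
				return
			}
			fmt.Printf("attempt %d: %v\n", attempt, err)
			time.Sleep(500 * time.Millisecond)
		}
	}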
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-525469
helpers_test.go:243: (dbg) docker inspect old-k8s-version-525469:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8d5d18297d31faedbee0dfbb322407f6bfb9f69bcf20408c0cd6dd14bd8ebca9",
	        "Created": "2025-11-19T02:56:56.874847167Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1645385,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T02:58:19.541887016Z",
	            "FinishedAt": "2025-11-19T02:58:18.744809961Z"
	        },
	        "Image": "sha256:6dfeb5329cc83d126555f88f43b09eef7a09c7f546c9166b94d33747df91b6df",
	        "ResolvConfPath": "/var/lib/docker/containers/8d5d18297d31faedbee0dfbb322407f6bfb9f69bcf20408c0cd6dd14bd8ebca9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8d5d18297d31faedbee0dfbb322407f6bfb9f69bcf20408c0cd6dd14bd8ebca9/hostname",
	        "HostsPath": "/var/lib/docker/containers/8d5d18297d31faedbee0dfbb322407f6bfb9f69bcf20408c0cd6dd14bd8ebca9/hosts",
	        "LogPath": "/var/lib/docker/containers/8d5d18297d31faedbee0dfbb322407f6bfb9f69bcf20408c0cd6dd14bd8ebca9/8d5d18297d31faedbee0dfbb322407f6bfb9f69bcf20408c0cd6dd14bd8ebca9-json.log",
	        "Name": "/old-k8s-version-525469",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-525469:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-525469",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8d5d18297d31faedbee0dfbb322407f6bfb9f69bcf20408c0cd6dd14bd8ebca9",
	                "LowerDir": "/var/lib/docker/overlay2/6626ee3152a36e280c4cbe358e2f948d8df311fa8c08ac4c768b9ba1c425fba4-init/diff:/var/lib/docker/overlay2/c48d08e2bd245db4e1c5c6447aff9f72126e9377265a1f1172daf5070a059e2a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6626ee3152a36e280c4cbe358e2f948d8df311fa8c08ac4c768b9ba1c425fba4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6626ee3152a36e280c4cbe358e2f948d8df311fa8c08ac4c768b9ba1c425fba4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6626ee3152a36e280c4cbe358e2f948d8df311fa8c08ac4c768b9ba1c425fba4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-525469",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-525469/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-525469",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-525469",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-525469",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d598a20c8c3e424a8c1de4fa2aefe8fa85889e349268493fa990af1e68a2a252",
	            "SandboxKey": "/var/run/docker/netns/d598a20c8c3e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34900"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34901"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34904"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34902"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34903"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-525469": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "06:d9:4b:1d:81:fc",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cfcb5e1a34a21f833f4806a9351850a2b1b407ff4f69e6c1e4043b73bcdc3f29",
	                    "EndpointID": "e55a62a30164f3f66e59716c5efb40af2627874e92b687b51430e14a05d78525",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-525469",
	                        "8d5d18297d31"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
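For reference, the SSH HostPort that the failed pause used (sshutil.go:53 above reports 127.0.0.1:34900) comes directly from this inspect output: NetworkSettings.Ports["22/tcp"][0].HostPort is "34900", which is exactly what the template run at cli_runner.go:164 extracts. A minimal sketch of that lookup follows, assuming Docker and the old-k8s-version-525469 container are reachable from where it runs; the docker command and template are the ones in the trace, while the Go wrapper and the file name sshport.go are illustrative assumptions.

	// sshport.go: resolve the host port mapped to the node's 22/tcp, the same
	// way the trace does before opening its SSH client.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("docker", "container", "inspect", "-f",
			`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
			"old-k8s-version-525469").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		// For the container inspected above this prints 34900.
		fmt.Println("ssh host port:", strings.TrimSpace(string(out)))
	}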
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-525469 -n old-k8s-version-525469
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-525469 -n old-k8s-version-525469: exit status 2 (338.529735ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-525469 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-525469 logs -n 25: (1.279350538s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-889743 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-889743             │ jenkins │ v1.37.0 │ 19 Nov 25 02:55 UTC │                     │
	│ ssh     │ -p cilium-889743 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-889743             │ jenkins │ v1.37.0 │ 19 Nov 25 02:55 UTC │                     │
	│ ssh     │ -p cilium-889743 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-889743             │ jenkins │ v1.37.0 │ 19 Nov 25 02:55 UTC │                     │
	│ ssh     │ -p cilium-889743 sudo containerd config dump                                                                                                                                                                                                  │ cilium-889743             │ jenkins │ v1.37.0 │ 19 Nov 25 02:55 UTC │                     │
	│ ssh     │ -p cilium-889743 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-889743             │ jenkins │ v1.37.0 │ 19 Nov 25 02:55 UTC │                     │
	│ delete  │ -p kubernetes-upgrade-315505                                                                                                                                                                                                                  │ kubernetes-upgrade-315505 │ jenkins │ v1.37.0 │ 19 Nov 25 02:55 UTC │ 19 Nov 25 02:55 UTC │
	│ ssh     │ -p cilium-889743 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-889743             │ jenkins │ v1.37.0 │ 19 Nov 25 02:55 UTC │                     │
	│ ssh     │ -p cilium-889743 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-889743             │ jenkins │ v1.37.0 │ 19 Nov 25 02:55 UTC │                     │
	│ ssh     │ -p cilium-889743 sudo crio config                                                                                                                                                                                                             │ cilium-889743             │ jenkins │ v1.37.0 │ 19 Nov 25 02:55 UTC │                     │
	│ delete  │ -p cilium-889743                                                                                                                                                                                                                              │ cilium-889743             │ jenkins │ v1.37.0 │ 19 Nov 25 02:55 UTC │ 19 Nov 25 02:55 UTC │
	│ start   │ -p force-systemd-env-335811 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-335811  │ jenkins │ v1.37.0 │ 19 Nov 25 02:55 UTC │ 19 Nov 25 02:56 UTC │
	│ start   │ -p cert-expiration-422184 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-422184    │ jenkins │ v1.37.0 │ 19 Nov 25 02:55 UTC │ 19 Nov 25 02:56 UTC │
	│ delete  │ -p force-systemd-env-335811                                                                                                                                                                                                                   │ force-systemd-env-335811  │ jenkins │ v1.37.0 │ 19 Nov 25 02:56 UTC │ 19 Nov 25 02:56 UTC │
	│ start   │ -p cert-options-702842 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-702842       │ jenkins │ v1.37.0 │ 19 Nov 25 02:56 UTC │ 19 Nov 25 02:56 UTC │
	│ ssh     │ cert-options-702842 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-702842       │ jenkins │ v1.37.0 │ 19 Nov 25 02:56 UTC │ 19 Nov 25 02:56 UTC │
	│ ssh     │ -p cert-options-702842 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-702842       │ jenkins │ v1.37.0 │ 19 Nov 25 02:56 UTC │ 19 Nov 25 02:56 UTC │
	│ delete  │ -p cert-options-702842                                                                                                                                                                                                                        │ cert-options-702842       │ jenkins │ v1.37.0 │ 19 Nov 25 02:56 UTC │ 19 Nov 25 02:56 UTC │
	│ start   │ -p old-k8s-version-525469 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-525469    │ jenkins │ v1.37.0 │ 19 Nov 25 02:56 UTC │ 19 Nov 25 02:57 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-525469 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-525469    │ jenkins │ v1.37.0 │ 19 Nov 25 02:58 UTC │                     │
	│ stop    │ -p old-k8s-version-525469 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-525469    │ jenkins │ v1.37.0 │ 19 Nov 25 02:58 UTC │ 19 Nov 25 02:58 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-525469 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-525469    │ jenkins │ v1.37.0 │ 19 Nov 25 02:58 UTC │ 19 Nov 25 02:58 UTC │
	│ start   │ -p old-k8s-version-525469 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-525469    │ jenkins │ v1.37.0 │ 19 Nov 25 02:58 UTC │ 19 Nov 25 02:59 UTC │
	│ image   │ old-k8s-version-525469 image list --format=json                                                                                                                                                                                               │ old-k8s-version-525469    │ jenkins │ v1.37.0 │ 19 Nov 25 02:59 UTC │ 19 Nov 25 02:59 UTC │
	│ pause   │ -p old-k8s-version-525469 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-525469    │ jenkins │ v1.37.0 │ 19 Nov 25 02:59 UTC │                     │
	│ start   │ -p cert-expiration-422184 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-422184    │ jenkins │ v1.37.0 │ 19 Nov 25 02:59 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 02:59:19
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 02:59:19.141608 1647490 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:59:19.141774 1647490 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:59:19.141778 1647490 out.go:374] Setting ErrFile to fd 2...
	I1119 02:59:19.141782 1647490 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:59:19.142169 1647490 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-1463525/.minikube/bin
	I1119 02:59:19.142585 1647490 out.go:368] Setting JSON to false
	I1119 02:59:19.145527 1647490 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":38487,"bootTime":1763482673,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1119 02:59:19.145604 1647490 start.go:143] virtualization:  
	I1119 02:59:19.150703 1647490 out.go:179] * [cert-expiration-422184] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1119 02:59:19.154154 1647490 out.go:179]   - MINIKUBE_LOCATION=21924
	I1119 02:59:19.154234 1647490 notify.go:221] Checking for updates...
	I1119 02:59:19.157793 1647490 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 02:59:19.160633 1647490 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21924-1463525/kubeconfig
	I1119 02:59:19.163600 1647490 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-1463525/.minikube
	I1119 02:59:19.166427 1647490 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1119 02:59:19.169335 1647490 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 02:59:19.174827 1647490 config.go:182] Loaded profile config "cert-expiration-422184": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:59:19.175486 1647490 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 02:59:19.214209 1647490 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1119 02:59:19.214318 1647490 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:59:19.292011 1647490 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-19 02:59:19.27714401 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 02:59:19.292097 1647490 docker.go:319] overlay module found
	I1119 02:59:19.295296 1647490 out.go:179] * Using the docker driver based on existing profile
	I1119 02:59:19.298021 1647490 start.go:309] selected driver: docker
	I1119 02:59:19.298031 1647490 start.go:930] validating driver "docker" against &{Name:cert-expiration-422184 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-422184 Namespace:default APIServerHAVIP: A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: So
cketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:59:19.298225 1647490 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 02:59:19.299069 1647490 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:59:19.388719 1647490 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-19 02:59:19.375560475 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 02:59:19.389030 1647490 cni.go:84] Creating CNI manager for ""
	I1119 02:59:19.389081 1647490 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 02:59:19.389121 1647490 start.go:353] cluster config:
	{Name:cert-expiration-422184 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-422184 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loca
l ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I1119 02:59:19.392784 1647490 out.go:179] * Starting "cert-expiration-422184" primary control-plane node in "cert-expiration-422184" cluster
	I1119 02:59:19.395697 1647490 cache.go:134] Beginning downloading kic base image for docker with crio
	I1119 02:59:19.398691 1647490 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1119 02:59:19.401775 1647490 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 02:59:19.401809 1647490 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1119 02:59:19.401823 1647490 cache.go:65] Caching tarball of preloaded images
	I1119 02:59:19.401906 1647490 preload.go:238] Found /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1119 02:59:19.401914 1647490 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 02:59:19.402024 1647490 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/cert-expiration-422184/config.json ...
	I1119 02:59:19.402228 1647490 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1119 02:59:19.433433 1647490 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1119 02:59:19.433445 1647490 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1119 02:59:19.433456 1647490 cache.go:243] Successfully downloaded all kic artifacts
	I1119 02:59:19.433479 1647490 start.go:360] acquireMachinesLock for cert-expiration-422184: {Name:mk32dc7ac9e27f225fa9a24e6855be1b2482a03f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 02:59:19.433569 1647490 start.go:364] duration metric: took 74.05µs to acquireMachinesLock for "cert-expiration-422184"
	I1119 02:59:19.433588 1647490 start.go:96] Skipping create...Using existing machine configuration
	I1119 02:59:19.433593 1647490 fix.go:54] fixHost starting: 
	I1119 02:59:19.433852 1647490 cli_runner.go:164] Run: docker container inspect cert-expiration-422184 --format={{.State.Status}}
	I1119 02:59:19.464945 1647490 fix.go:112] recreateIfNeeded on cert-expiration-422184: state=Running err=<nil>
	W1119 02:59:19.464973 1647490 fix.go:138] unexpected machine state, will restart: <nil>
	
	
	==> CRI-O <==
	Nov 19 02:59:05 old-k8s-version-525469 crio[652]: time="2025-11-19T02:59:05.214608969Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:59:05 old-k8s-version-525469 crio[652]: time="2025-11-19T02:59:05.221720052Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:59:05 old-k8s-version-525469 crio[652]: time="2025-11-19T02:59:05.22233642Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:59:05 old-k8s-version-525469 crio[652]: time="2025-11-19T02:59:05.239727153Z" level=info msg="Created container 1966d902d6547084dde2f036edcea56b707e50a1b070c2ee7f35ca4118ef27be: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-t72pb/dashboard-metrics-scraper" id=9b72fdaa-ed27-4815-b107-bc6d330a16d3 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 02:59:05 old-k8s-version-525469 crio[652]: time="2025-11-19T02:59:05.240391224Z" level=info msg="Starting container: 1966d902d6547084dde2f036edcea56b707e50a1b070c2ee7f35ca4118ef27be" id=2cbbd906-ad58-46ac-9406-3c41b3a681de name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 02:59:05 old-k8s-version-525469 crio[652]: time="2025-11-19T02:59:05.242672636Z" level=info msg="Started container" PID=1643 containerID=1966d902d6547084dde2f036edcea56b707e50a1b070c2ee7f35ca4118ef27be description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-t72pb/dashboard-metrics-scraper id=2cbbd906-ad58-46ac-9406-3c41b3a681de name=/runtime.v1.RuntimeService/StartContainer sandboxID=4c124493c779612df52b11e86a3b873bd94abbd7991d805b1fe7e81bb3eb060f
	Nov 19 02:59:05 old-k8s-version-525469 conmon[1641]: conmon 1966d902d6547084dde2 <ninfo>: container 1643 exited with status 1
	Nov 19 02:59:05 old-k8s-version-525469 crio[652]: time="2025-11-19T02:59:05.702072583Z" level=info msg="Removing container: d1b7f8bfa0b8b231b04f996552d3cabf67c67ad9936519b6492f19e61a2056c5" id=836bdb92-ceb2-4192-8af9-72ef57efef36 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 19 02:59:05 old-k8s-version-525469 crio[652]: time="2025-11-19T02:59:05.709719766Z" level=info msg="Error loading conmon cgroup of container d1b7f8bfa0b8b231b04f996552d3cabf67c67ad9936519b6492f19e61a2056c5: cgroup deleted" id=836bdb92-ceb2-4192-8af9-72ef57efef36 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 19 02:59:05 old-k8s-version-525469 crio[652]: time="2025-11-19T02:59:05.713366726Z" level=info msg="Removed container d1b7f8bfa0b8b231b04f996552d3cabf67c67ad9936519b6492f19e61a2056c5: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-t72pb/dashboard-metrics-scraper" id=836bdb92-ceb2-4192-8af9-72ef57efef36 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 19 02:59:13 old-k8s-version-525469 crio[652]: time="2025-11-19T02:59:13.542356324Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 02:59:13 old-k8s-version-525469 crio[652]: time="2025-11-19T02:59:13.54645695Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 02:59:13 old-k8s-version-525469 crio[652]: time="2025-11-19T02:59:13.546490672Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 02:59:13 old-k8s-version-525469 crio[652]: time="2025-11-19T02:59:13.54651413Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 02:59:13 old-k8s-version-525469 crio[652]: time="2025-11-19T02:59:13.550284779Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 02:59:13 old-k8s-version-525469 crio[652]: time="2025-11-19T02:59:13.55031686Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 02:59:13 old-k8s-version-525469 crio[652]: time="2025-11-19T02:59:13.550341196Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 02:59:13 old-k8s-version-525469 crio[652]: time="2025-11-19T02:59:13.553929861Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 02:59:13 old-k8s-version-525469 crio[652]: time="2025-11-19T02:59:13.553962418Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 02:59:13 old-k8s-version-525469 crio[652]: time="2025-11-19T02:59:13.553993924Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 02:59:13 old-k8s-version-525469 crio[652]: time="2025-11-19T02:59:13.557313649Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 02:59:13 old-k8s-version-525469 crio[652]: time="2025-11-19T02:59:13.557341611Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 02:59:13 old-k8s-version-525469 crio[652]: time="2025-11-19T02:59:13.557364339Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 02:59:13 old-k8s-version-525469 crio[652]: time="2025-11-19T02:59:13.560723808Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 02:59:13 old-k8s-version-525469 crio[652]: time="2025-11-19T02:59:13.560754224Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	1966d902d6547       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           16 seconds ago      Exited              dashboard-metrics-scraper   2                   4c124493c7796       dashboard-metrics-scraper-5f989dc9cf-t72pb       kubernetes-dashboard
	1e39dc39380ca       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           18 seconds ago      Running             storage-provisioner         2                   6fa5811dc37c3       storage-provisioner                              kube-system
	5b0855eaad416       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   31 seconds ago      Running             kubernetes-dashboard        0                   92714275d6e52       kubernetes-dashboard-8694d4445c-vnbjk            kubernetes-dashboard
	53863eb3f6e2f       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           48 seconds ago      Running             coredns                     1                   d86b256bb0482       coredns-5dd5756b68-w8wb6                         kube-system
	bde6e20291e7c       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           48 seconds ago      Running             busybox                     1                   7baa4c9240789       busybox                                          default
	60d2a0edd3ab8       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           49 seconds ago      Running             kindnet-cni                 1                   3df30b07a141e       kindnet-rj2cj                                    kube-system
	235acbaf06e66       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           49 seconds ago      Running             kube-proxy                  1                   00cb955a6cf27       kube-proxy-jf89k                                 kube-system
	142e4b24ecf67       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           49 seconds ago      Exited              storage-provisioner         1                   6fa5811dc37c3       storage-provisioner                              kube-system
	2f6292580615c       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           55 seconds ago      Running             kube-scheduler              1                   23884360a4ee5       kube-scheduler-old-k8s-version-525469            kube-system
	8d62ae4ac8c3b       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           55 seconds ago      Running             kube-controller-manager     1                   b93a827ee700c       kube-controller-manager-old-k8s-version-525469   kube-system
	ff23878d27aac       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           55 seconds ago      Running             etcd                        1                   9bef78315d34b       etcd-old-k8s-version-525469                      kube-system
	567aa38469694       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           55 seconds ago      Running             kube-apiserver              1                   b15c8036a6f1d       kube-apiserver-old-k8s-version-525469            kube-system
	
	
	==> coredns [53863eb3f6e2f364671dc355ca4cbd8a26a9924dd9beec0d9814d3d6ca0e74fa] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:53659 - 44166 "HINFO IN 5991072870533325853.6843760316668869777. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.00484458s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-525469
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-525469
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ebd49efe8b7d44f89fc96e21beffcd64dfee8277
	                    minikube.k8s.io/name=old-k8s-version-525469
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T02_57_25_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 02:57:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-525469
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 02:59:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 02:59:02 +0000   Wed, 19 Nov 2025 02:57:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 02:59:02 +0000   Wed, 19 Nov 2025 02:57:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 02:59:02 +0000   Wed, 19 Nov 2025 02:57:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 02:59:02 +0000   Wed, 19 Nov 2025 02:57:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-525469
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                3d1f45c6-9f00-4378-a685-d971289e6f86
	  Boot ID:                    b92b1939-fcd0-45dc-ac89-2d161566a71c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 coredns-5dd5756b68-w8wb6                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     104s
	  kube-system                 etcd-old-k8s-version-525469                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         117s
	  kube-system                 kindnet-rj2cj                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      105s
	  kube-system                 kube-apiserver-old-k8s-version-525469             250m (12%)    0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-controller-manager-old-k8s-version-525469    200m (10%)    0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-proxy-jf89k                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-scheduler-old-k8s-version-525469             100m (5%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-t72pb        0 (0%)        0 (0%)      0 (0%)           0 (0%)         38s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-vnbjk             0 (0%)        0 (0%)      0 (0%)           0 (0%)         38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 103s                 kube-proxy       
	  Normal  Starting                 48s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m6s (x8 over 2m6s)  kubelet          Node old-k8s-version-525469 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m6s (x8 over 2m6s)  kubelet          Node old-k8s-version-525469 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m6s (x8 over 2m6s)  kubelet          Node old-k8s-version-525469 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     118s                 kubelet          Node old-k8s-version-525469 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  118s                 kubelet          Node old-k8s-version-525469 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    118s                 kubelet          Node old-k8s-version-525469 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 118s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           105s                 node-controller  Node old-k8s-version-525469 event: Registered Node old-k8s-version-525469 in Controller
	  Normal  NodeReady                91s                  kubelet          Node old-k8s-version-525469 status is now: NodeReady
	  Normal  Starting                 56s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  56s (x8 over 56s)    kubelet          Node old-k8s-version-525469 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    56s (x8 over 56s)    kubelet          Node old-k8s-version-525469 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     56s (x8 over 56s)    kubelet          Node old-k8s-version-525469 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           38s                  node-controller  Node old-k8s-version-525469 event: Registered Node old-k8s-version-525469 in Controller
	
	
	==> dmesg <==
	[Nov19 02:30] overlayfs: idmapped layers are currently not supported
	[Nov19 02:35] overlayfs: idmapped layers are currently not supported
	[ +37.747558] overlayfs: idmapped layers are currently not supported
	[Nov19 02:37] overlayfs: idmapped layers are currently not supported
	[Nov19 02:38] overlayfs: idmapped layers are currently not supported
	[Nov19 02:39] overlayfs: idmapped layers are currently not supported
	[Nov19 02:41] overlayfs: idmapped layers are currently not supported
	[ +25.528121] overlayfs: idmapped layers are currently not supported
	[ +11.329962] overlayfs: idmapped layers are currently not supported
	[Nov19 02:42] overlayfs: idmapped layers are currently not supported
	[ +16.386117] overlayfs: idmapped layers are currently not supported
	[Nov19 02:43] overlayfs: idmapped layers are currently not supported
	[ +23.762081] overlayfs: idmapped layers are currently not supported
	[Nov19 02:45] overlayfs: idmapped layers are currently not supported
	[Nov19 02:46] overlayfs: idmapped layers are currently not supported
	[Nov19 02:48] overlayfs: idmapped layers are currently not supported
	[Nov19 02:50] overlayfs: idmapped layers are currently not supported
	[ +30.622614] overlayfs: idmapped layers are currently not supported
	[Nov19 02:53] overlayfs: idmapped layers are currently not supported
	[Nov19 02:55] overlayfs: idmapped layers are currently not supported
	[ +48.629499] overlayfs: idmapped layers are currently not supported
	[Nov19 02:56] overlayfs: idmapped layers are currently not supported
	[ +31.470515] overlayfs: idmapped layers are currently not supported
	[Nov19 02:57] overlayfs: idmapped layers are currently not supported
	[Nov19 02:58] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [ff23878d27aac15b322cc8fb8b4fb5e92dfdaae8febb40b85cef9bd65331149b] <==
	{"level":"info","ts":"2025-11-19T02:58:27.197936Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-19T02:58:27.197947Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-19T02:58:27.198231Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-11-19T02:58:27.198983Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-11-19T02:58:27.199548Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-19T02:58:27.199642Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-19T02:58:27.214244Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-19T02:58:27.2164Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-19T02:58:27.216506Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-19T02:58:27.214454Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-19T02:58:27.216615Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-19T02:58:28.485546Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-19T02:58:28.485594Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-19T02:58:28.485624Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-19T02:58:28.485638Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-11-19T02:58:28.485644Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-11-19T02:58:28.485655Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-11-19T02:58:28.485663Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-11-19T02:58:28.489401Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-525469 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-19T02:58:28.489469Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-19T02:58:28.490721Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-11-19T02:58:28.492103Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-19T02:58:28.496165Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-19T02:58:28.492134Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-19T02:58:28.505602Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 02:59:22 up 10:41,  0 user,  load average: 1.38, 2.63, 2.38
	Linux old-k8s-version-525469 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [60d2a0edd3ab8416b4bc4c9842ab46127629fde5aef3c4a1faea18d3bd15fde4] <==
	I1119 02:58:33.350247       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 02:58:33.350441       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1119 02:58:33.350568       1 main.go:148] setting mtu 1500 for CNI 
	I1119 02:58:33.350580       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 02:58:33.350589       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T02:58:33Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 02:58:33.536738       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 02:58:33.536756       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 02:58:33.536765       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 02:58:33.537047       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1119 02:59:03.537271       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1119 02:59:03.537419       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1119 02:59:03.537448       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1119 02:59:03.537454       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1119 02:59:05.037848       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 02:59:05.037888       1 metrics.go:72] Registering metrics
	I1119 02:59:05.037948       1 controller.go:711] "Syncing nftables rules"
	I1119 02:59:13.542026       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1119 02:59:13.542071       1 main.go:301] handling current node
	
	
	==> kube-apiserver [567aa3846969404af990c3a71e9425667593df67d00ae761c20f8045d800b846] <==
	I1119 02:58:31.938971       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 02:58:31.939940       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1119 02:58:31.965641       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1119 02:58:31.972995       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1119 02:58:31.973369       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1119 02:58:31.975666       1 shared_informer.go:318] Caches are synced for configmaps
	I1119 02:58:31.975853       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1119 02:58:31.975868       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1119 02:58:31.976205       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1119 02:58:31.983622       1 aggregator.go:166] initial CRD sync complete...
	I1119 02:58:31.983648       1 autoregister_controller.go:141] Starting autoregister controller
	I1119 02:58:31.983656       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1119 02:58:31.983663       1 cache.go:39] Caches are synced for autoregister controller
	E1119 02:58:32.018836       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1119 02:58:32.589678       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 02:58:33.989747       1 controller.go:624] quota admission added evaluator for: namespaces
	I1119 02:58:34.040635       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1119 02:58:34.067089       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 02:58:34.079613       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 02:58:34.090424       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1119 02:58:34.139747       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.183.3"}
	I1119 02:58:34.167361       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.103.23"}
	I1119 02:58:44.720108       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1119 02:58:44.819927       1 controller.go:624] quota admission added evaluator for: endpoints
	I1119 02:58:44.945790       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [8d62ae4ac8c3bce9aa36e4a78452b16e478ca3633adaf4a4fd9ade5e868e3c78] <==
	I1119 02:58:44.883975       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="87.997µs"
	I1119 02:58:44.886003       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-vnbjk"
	I1119 02:58:44.886028       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-t72pb"
	I1119 02:58:44.902479       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="174.560946ms"
	I1119 02:58:44.913175       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="190.124958ms"
	I1119 02:58:44.916988       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="14.367631ms"
	I1119 02:58:44.927144       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="3.989383ms"
	I1119 02:58:44.934174       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="20.933991ms"
	I1119 02:58:44.934386       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="102.823µs"
	I1119 02:58:44.940566       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="61.463µs"
	I1119 02:58:44.956187       1 endpointslice_controller.go:310] "Error syncing endpoint slices for service, retrying" key="kubernetes-dashboard/dashboard-metrics-scraper" err="EndpointSlice informer cache is out of date"
	I1119 02:58:44.959731       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="97.942µs"
	I1119 02:58:44.962510       1 endpointslice_controller.go:310] "Error syncing endpoint slices for service, retrying" key="kubernetes-dashboard/kubernetes-dashboard" err="EndpointSlice informer cache is out of date"
	I1119 02:58:44.978859       1 shared_informer.go:318] Caches are synced for garbage collector
	I1119 02:58:44.978900       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1119 02:58:44.983680       1 shared_informer.go:318] Caches are synced for garbage collector
	I1119 02:58:50.690767       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="18.026547ms"
	I1119 02:58:50.690885       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="63.58µs"
	I1119 02:58:54.679759       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="60.544µs"
	I1119 02:58:55.696442       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="94.767µs"
	I1119 02:58:56.690017       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="57.024µs"
	I1119 02:59:05.474809       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="12.532311ms"
	I1119 02:59:05.475814       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="56.573µs"
	I1119 02:59:05.722623       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="61.25µs"
	I1119 02:59:15.226717       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="55.481µs"
	
	
	==> kube-proxy [235acbaf06e664b5de8391a1ab5780f85ac9ba0416c65d86f8dccbfaa51068d1] <==
	I1119 02:58:33.470515       1 server_others.go:69] "Using iptables proxy"
	I1119 02:58:33.502955       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1119 02:58:33.549966       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 02:58:33.558600       1 server_others.go:152] "Using iptables Proxier"
	I1119 02:58:33.558637       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1119 02:58:33.558644       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1119 02:58:33.558690       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1119 02:58:33.558897       1 server.go:846] "Version info" version="v1.28.0"
	I1119 02:58:33.558907       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 02:58:33.561133       1 config.go:188] "Starting service config controller"
	I1119 02:58:33.561160       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1119 02:58:33.561182       1 config.go:97] "Starting endpoint slice config controller"
	I1119 02:58:33.561185       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1119 02:58:33.561985       1 config.go:315] "Starting node config controller"
	I1119 02:58:33.561993       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1119 02:58:33.662572       1 shared_informer.go:318] Caches are synced for service config
	I1119 02:58:33.662626       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1119 02:58:33.663838       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [2f6292580615ce6fb56d365b1c0f4e962165fafffb0c461422541aa1841cfc86] <==
	I1119 02:58:29.800816       1 serving.go:348] Generated self-signed cert in-memory
	I1119 02:58:33.069690       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1119 02:58:33.069718       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 02:58:33.109180       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1119 02:58:33.109285       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1119 02:58:33.109299       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1119 02:58:33.109318       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1119 02:58:33.111228       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 02:58:33.111240       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1119 02:58:33.111255       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1119 02:58:33.111260       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1119 02:58:33.209662       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1119 02:58:33.215264       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1119 02:58:33.215312       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 19 02:58:44 old-k8s-version-525469 kubelet[784]: I1119 02:58:44.891961     784 topology_manager.go:215] "Topology Admit Handler" podUID="e30d552f-4050-41bf-b875-0c95fae03973" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-vnbjk"
	Nov 19 02:58:44 old-k8s-version-525469 kubelet[784]: I1119 02:58:44.907538     784 topology_manager.go:215] "Topology Admit Handler" podUID="3ecb8f41-c60c-4ec7-822b-61adb6b19af0" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-t72pb"
	Nov 19 02:58:44 old-k8s-version-525469 kubelet[784]: I1119 02:58:44.981714     784 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/3ecb8f41-c60c-4ec7-822b-61adb6b19af0-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-t72pb\" (UID: \"3ecb8f41-c60c-4ec7-822b-61adb6b19af0\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-t72pb"
	Nov 19 02:58:44 old-k8s-version-525469 kubelet[784]: I1119 02:58:44.981771     784 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/e30d552f-4050-41bf-b875-0c95fae03973-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-vnbjk\" (UID: \"e30d552f-4050-41bf-b875-0c95fae03973\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-vnbjk"
	Nov 19 02:58:44 old-k8s-version-525469 kubelet[784]: I1119 02:58:44.981801     784 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2c2l7\" (UniqueName: \"kubernetes.io/projected/e30d552f-4050-41bf-b875-0c95fae03973-kube-api-access-2c2l7\") pod \"kubernetes-dashboard-8694d4445c-vnbjk\" (UID: \"e30d552f-4050-41bf-b875-0c95fae03973\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-vnbjk"
	Nov 19 02:58:44 old-k8s-version-525469 kubelet[784]: I1119 02:58:44.981826     784 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jk8mz\" (UniqueName: \"kubernetes.io/projected/3ecb8f41-c60c-4ec7-822b-61adb6b19af0-kube-api-access-jk8mz\") pod \"dashboard-metrics-scraper-5f989dc9cf-t72pb\" (UID: \"3ecb8f41-c60c-4ec7-822b-61adb6b19af0\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-t72pb"
	Nov 19 02:58:45 old-k8s-version-525469 kubelet[784]: W1119 02:58:45.275334     784 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/8d5d18297d31faedbee0dfbb322407f6bfb9f69bcf20408c0cd6dd14bd8ebca9/crio-92714275d6e52fa2b97859bc480838816c262719267ae38405e1340b4528cafb WatchSource:0}: Error finding container 92714275d6e52fa2b97859bc480838816c262719267ae38405e1340b4528cafb: Status 404 returned error can't find the container with id 92714275d6e52fa2b97859bc480838816c262719267ae38405e1340b4528cafb
	Nov 19 02:58:54 old-k8s-version-525469 kubelet[784]: I1119 02:58:54.666393     784 scope.go:117] "RemoveContainer" containerID="2e8e9c5369a7eba3d1664d074e0e2c9344a7617843aec5f76fb0fbccef03ce3b"
	Nov 19 02:58:54 old-k8s-version-525469 kubelet[784]: I1119 02:58:54.683326     784 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-vnbjk" podStartSLOduration=5.800547176 podCreationTimestamp="2025-11-19 02:58:44 +0000 UTC" firstStartedPulling="2025-11-19 02:58:45.278634319 +0000 UTC m=+18.943556147" lastFinishedPulling="2025-11-19 02:58:50.160615033 +0000 UTC m=+23.825536861" observedRunningTime="2025-11-19 02:58:50.679150093 +0000 UTC m=+24.344071913" watchObservedRunningTime="2025-11-19 02:58:54.68252789 +0000 UTC m=+28.347449710"
	Nov 19 02:58:55 old-k8s-version-525469 kubelet[784]: I1119 02:58:55.670045     784 scope.go:117] "RemoveContainer" containerID="d1b7f8bfa0b8b231b04f996552d3cabf67c67ad9936519b6492f19e61a2056c5"
	Nov 19 02:58:55 old-k8s-version-525469 kubelet[784]: I1119 02:58:55.671006     784 scope.go:117] "RemoveContainer" containerID="2e8e9c5369a7eba3d1664d074e0e2c9344a7617843aec5f76fb0fbccef03ce3b"
	Nov 19 02:58:55 old-k8s-version-525469 kubelet[784]: E1119 02:58:55.671649     784 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-t72pb_kubernetes-dashboard(3ecb8f41-c60c-4ec7-822b-61adb6b19af0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-t72pb" podUID="3ecb8f41-c60c-4ec7-822b-61adb6b19af0"
	Nov 19 02:58:56 old-k8s-version-525469 kubelet[784]: I1119 02:58:56.674512     784 scope.go:117] "RemoveContainer" containerID="d1b7f8bfa0b8b231b04f996552d3cabf67c67ad9936519b6492f19e61a2056c5"
	Nov 19 02:58:56 old-k8s-version-525469 kubelet[784]: E1119 02:58:56.674797     784 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-t72pb_kubernetes-dashboard(3ecb8f41-c60c-4ec7-822b-61adb6b19af0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-t72pb" podUID="3ecb8f41-c60c-4ec7-822b-61adb6b19af0"
	Nov 19 02:59:03 old-k8s-version-525469 kubelet[784]: I1119 02:59:03.690884     784 scope.go:117] "RemoveContainer" containerID="142e4b24ecf679fdf5439063370dc0b248972ad5b7156c4b11d759ca3eb1a5fb"
	Nov 19 02:59:05 old-k8s-version-525469 kubelet[784]: I1119 02:59:05.211062     784 scope.go:117] "RemoveContainer" containerID="d1b7f8bfa0b8b231b04f996552d3cabf67c67ad9936519b6492f19e61a2056c5"
	Nov 19 02:59:05 old-k8s-version-525469 kubelet[784]: I1119 02:59:05.700428     784 scope.go:117] "RemoveContainer" containerID="d1b7f8bfa0b8b231b04f996552d3cabf67c67ad9936519b6492f19e61a2056c5"
	Nov 19 02:59:05 old-k8s-version-525469 kubelet[784]: I1119 02:59:05.700803     784 scope.go:117] "RemoveContainer" containerID="1966d902d6547084dde2f036edcea56b707e50a1b070c2ee7f35ca4118ef27be"
	Nov 19 02:59:05 old-k8s-version-525469 kubelet[784]: E1119 02:59:05.701165     784 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-t72pb_kubernetes-dashboard(3ecb8f41-c60c-4ec7-822b-61adb6b19af0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-t72pb" podUID="3ecb8f41-c60c-4ec7-822b-61adb6b19af0"
	Nov 19 02:59:15 old-k8s-version-525469 kubelet[784]: I1119 02:59:15.211522     784 scope.go:117] "RemoveContainer" containerID="1966d902d6547084dde2f036edcea56b707e50a1b070c2ee7f35ca4118ef27be"
	Nov 19 02:59:15 old-k8s-version-525469 kubelet[784]: E1119 02:59:15.211852     784 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-t72pb_kubernetes-dashboard(3ecb8f41-c60c-4ec7-822b-61adb6b19af0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-t72pb" podUID="3ecb8f41-c60c-4ec7-822b-61adb6b19af0"
	Nov 19 02:59:19 old-k8s-version-525469 kubelet[784]: I1119 02:59:19.571734     784 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 19 02:59:19 old-k8s-version-525469 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 19 02:59:19 old-k8s-version-525469 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 19 02:59:19 old-k8s-version-525469 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [5b0855eaad416574d49cfa5dbd17994e2931ec934fc6e5e17bcf06b94186dabd] <==
	2025/11/19 02:58:50 Starting overwatch
	2025/11/19 02:58:50 Using namespace: kubernetes-dashboard
	2025/11/19 02:58:50 Using in-cluster config to connect to apiserver
	2025/11/19 02:58:50 Using secret token for csrf signing
	2025/11/19 02:58:50 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/19 02:58:50 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/19 02:58:50 Successful initial request to the apiserver, version: v1.28.0
	2025/11/19 02:58:50 Generating JWE encryption key
	2025/11/19 02:58:50 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/19 02:58:50 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/19 02:58:51 Initializing JWE encryption key from synchronized object
	2025/11/19 02:58:51 Creating in-cluster Sidecar client
	2025/11/19 02:58:51 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/19 02:58:51 Serving insecurely on HTTP port: 9090
	2025/11/19 02:59:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [142e4b24ecf679fdf5439063370dc0b248972ad5b7156c4b11d759ca3eb1a5fb] <==
	I1119 02:58:33.268174       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1119 02:59:03.270478       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [1e39dc39380cacbe09c4d92d95094596956ef4bdfab3c911455508c2ea032684] <==
	I1119 02:59:03.742899       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1119 02:59:03.755529       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1119 02:59:03.755640       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1119 02:59:21.157568       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1119 02:59:21.157747       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-525469_512f5c86-e1ff-42a1-bc21-e0c3d3778a0b!
	I1119 02:59:21.158374       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e9c87b51-c0db-4a20-998e-baae02e74881", APIVersion:"v1", ResourceVersion:"668", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-525469_512f5c86-e1ff-42a1-bc21-e0c3d3778a0b became leader
	I1119 02:59:21.258266       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-525469_512f5c86-e1ff-42a1-bc21-e0c3d3778a0b!
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-525469 -n old-k8s-version-525469
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-525469 -n old-k8s-version-525469: exit status 2 (327.40572ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-525469 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-525469
helpers_test.go:243: (dbg) docker inspect old-k8s-version-525469:

-- stdout --
	[
	    {
	        "Id": "8d5d18297d31faedbee0dfbb322407f6bfb9f69bcf20408c0cd6dd14bd8ebca9",
	        "Created": "2025-11-19T02:56:56.874847167Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1645385,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T02:58:19.541887016Z",
	            "FinishedAt": "2025-11-19T02:58:18.744809961Z"
	        },
	        "Image": "sha256:6dfeb5329cc83d126555f88f43b09eef7a09c7f546c9166b94d33747df91b6df",
	        "ResolvConfPath": "/var/lib/docker/containers/8d5d18297d31faedbee0dfbb322407f6bfb9f69bcf20408c0cd6dd14bd8ebca9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8d5d18297d31faedbee0dfbb322407f6bfb9f69bcf20408c0cd6dd14bd8ebca9/hostname",
	        "HostsPath": "/var/lib/docker/containers/8d5d18297d31faedbee0dfbb322407f6bfb9f69bcf20408c0cd6dd14bd8ebca9/hosts",
	        "LogPath": "/var/lib/docker/containers/8d5d18297d31faedbee0dfbb322407f6bfb9f69bcf20408c0cd6dd14bd8ebca9/8d5d18297d31faedbee0dfbb322407f6bfb9f69bcf20408c0cd6dd14bd8ebca9-json.log",
	        "Name": "/old-k8s-version-525469",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-525469:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-525469",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8d5d18297d31faedbee0dfbb322407f6bfb9f69bcf20408c0cd6dd14bd8ebca9",
	                "LowerDir": "/var/lib/docker/overlay2/6626ee3152a36e280c4cbe358e2f948d8df311fa8c08ac4c768b9ba1c425fba4-init/diff:/var/lib/docker/overlay2/c48d08e2bd245db4e1c5c6447aff9f72126e9377265a1f1172daf5070a059e2a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6626ee3152a36e280c4cbe358e2f948d8df311fa8c08ac4c768b9ba1c425fba4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6626ee3152a36e280c4cbe358e2f948d8df311fa8c08ac4c768b9ba1c425fba4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6626ee3152a36e280c4cbe358e2f948d8df311fa8c08ac4c768b9ba1c425fba4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-525469",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-525469/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-525469",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-525469",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-525469",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d598a20c8c3e424a8c1de4fa2aefe8fa85889e349268493fa990af1e68a2a252",
	            "SandboxKey": "/var/run/docker/netns/d598a20c8c3e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34900"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34901"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34904"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34902"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34903"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-525469": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "06:d9:4b:1d:81:fc",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cfcb5e1a34a21f833f4806a9351850a2b1b407ff4f69e6c1e4043b73bcdc3f29",
	                    "EndpointID": "e55a62a30164f3f66e59716c5efb40af2627874e92b687b51430e14a05d78525",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-525469",
	                        "8d5d18297d31"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
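The inspect output above is the full JSON; when only a single value is needed, for example the host port mapped to the API server's 8443/tcp, a Go-template query avoids scraping it by eye. A sketch using docker inspect --format, with the field path taken from the JSON above:

    docker inspect old-k8s-version-525469 \
      --format '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}'
    # prints 34903 for the container state captured above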
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-525469 -n old-k8s-version-525469
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-525469 -n old-k8s-version-525469: exit status 2 (334.352541ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
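The --format flag takes a Go template over the status structure, so the fields probed one at a time above can also be read in a single call. A sketch (Host and APIServer appear elsewhere in this report; Kubelet and Kubeconfig are the remaining fields of minikube's default status output):

    out/minikube-linux-arm64 status -p old-k8s-version-525469 \
      --format 'host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}} kubeconfig:{{.Kubeconfig}}'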
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-525469 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-525469 logs -n 25: (1.234528768s)
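The harness only keeps the 25-line tail shown below; when reproducing the failure locally it can be easier to capture the whole log to a file. A sketch, assuming the logs subcommand's --file flag, with the same binary and profile as above:

    out/minikube-linux-arm64 -p old-k8s-version-525469 logs --file=old-k8s-version-525469.log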
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-889743 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-889743             │ jenkins │ v1.37.0 │ 19 Nov 25 02:55 UTC │                     │
	│ ssh     │ -p cilium-889743 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-889743             │ jenkins │ v1.37.0 │ 19 Nov 25 02:55 UTC │                     │
	│ ssh     │ -p cilium-889743 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-889743             │ jenkins │ v1.37.0 │ 19 Nov 25 02:55 UTC │                     │
	│ ssh     │ -p cilium-889743 sudo containerd config dump                                                                                                                                                                                                  │ cilium-889743             │ jenkins │ v1.37.0 │ 19 Nov 25 02:55 UTC │                     │
	│ ssh     │ -p cilium-889743 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-889743             │ jenkins │ v1.37.0 │ 19 Nov 25 02:55 UTC │                     │
	│ delete  │ -p kubernetes-upgrade-315505                                                                                                                                                                                                                  │ kubernetes-upgrade-315505 │ jenkins │ v1.37.0 │ 19 Nov 25 02:55 UTC │ 19 Nov 25 02:55 UTC │
	│ ssh     │ -p cilium-889743 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-889743             │ jenkins │ v1.37.0 │ 19 Nov 25 02:55 UTC │                     │
	│ ssh     │ -p cilium-889743 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-889743             │ jenkins │ v1.37.0 │ 19 Nov 25 02:55 UTC │                     │
	│ ssh     │ -p cilium-889743 sudo crio config                                                                                                                                                                                                             │ cilium-889743             │ jenkins │ v1.37.0 │ 19 Nov 25 02:55 UTC │                     │
	│ delete  │ -p cilium-889743                                                                                                                                                                                                                              │ cilium-889743             │ jenkins │ v1.37.0 │ 19 Nov 25 02:55 UTC │ 19 Nov 25 02:55 UTC │
	│ start   │ -p force-systemd-env-335811 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-335811  │ jenkins │ v1.37.0 │ 19 Nov 25 02:55 UTC │ 19 Nov 25 02:56 UTC │
	│ start   │ -p cert-expiration-422184 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-422184    │ jenkins │ v1.37.0 │ 19 Nov 25 02:55 UTC │ 19 Nov 25 02:56 UTC │
	│ delete  │ -p force-systemd-env-335811                                                                                                                                                                                                                   │ force-systemd-env-335811  │ jenkins │ v1.37.0 │ 19 Nov 25 02:56 UTC │ 19 Nov 25 02:56 UTC │
	│ start   │ -p cert-options-702842 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-702842       │ jenkins │ v1.37.0 │ 19 Nov 25 02:56 UTC │ 19 Nov 25 02:56 UTC │
	│ ssh     │ cert-options-702842 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-702842       │ jenkins │ v1.37.0 │ 19 Nov 25 02:56 UTC │ 19 Nov 25 02:56 UTC │
	│ ssh     │ -p cert-options-702842 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-702842       │ jenkins │ v1.37.0 │ 19 Nov 25 02:56 UTC │ 19 Nov 25 02:56 UTC │
	│ delete  │ -p cert-options-702842                                                                                                                                                                                                                        │ cert-options-702842       │ jenkins │ v1.37.0 │ 19 Nov 25 02:56 UTC │ 19 Nov 25 02:56 UTC │
	│ start   │ -p old-k8s-version-525469 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-525469    │ jenkins │ v1.37.0 │ 19 Nov 25 02:56 UTC │ 19 Nov 25 02:57 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-525469 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-525469    │ jenkins │ v1.37.0 │ 19 Nov 25 02:58 UTC │                     │
	│ stop    │ -p old-k8s-version-525469 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-525469    │ jenkins │ v1.37.0 │ 19 Nov 25 02:58 UTC │ 19 Nov 25 02:58 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-525469 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-525469    │ jenkins │ v1.37.0 │ 19 Nov 25 02:58 UTC │ 19 Nov 25 02:58 UTC │
	│ start   │ -p old-k8s-version-525469 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-525469    │ jenkins │ v1.37.0 │ 19 Nov 25 02:58 UTC │ 19 Nov 25 02:59 UTC │
	│ image   │ old-k8s-version-525469 image list --format=json                                                                                                                                                                                               │ old-k8s-version-525469    │ jenkins │ v1.37.0 │ 19 Nov 25 02:59 UTC │ 19 Nov 25 02:59 UTC │
	│ pause   │ -p old-k8s-version-525469 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-525469    │ jenkins │ v1.37.0 │ 19 Nov 25 02:59 UTC │                     │
	│ start   │ -p cert-expiration-422184 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-422184    │ jenkins │ v1.37.0 │ 19 Nov 25 02:59 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 02:59:19
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 02:59:19.141608 1647490 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:59:19.141774 1647490 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:59:19.141778 1647490 out.go:374] Setting ErrFile to fd 2...
	I1119 02:59:19.141782 1647490 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:59:19.142169 1647490 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-1463525/.minikube/bin
	I1119 02:59:19.142585 1647490 out.go:368] Setting JSON to false
	I1119 02:59:19.145527 1647490 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":38487,"bootTime":1763482673,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1119 02:59:19.145604 1647490 start.go:143] virtualization:  
	I1119 02:59:19.150703 1647490 out.go:179] * [cert-expiration-422184] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1119 02:59:19.154154 1647490 out.go:179]   - MINIKUBE_LOCATION=21924
	I1119 02:59:19.154234 1647490 notify.go:221] Checking for updates...
	I1119 02:59:19.157793 1647490 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 02:59:19.160633 1647490 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21924-1463525/kubeconfig
	I1119 02:59:19.163600 1647490 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-1463525/.minikube
	I1119 02:59:19.166427 1647490 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1119 02:59:19.169335 1647490 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 02:59:19.174827 1647490 config.go:182] Loaded profile config "cert-expiration-422184": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:59:19.175486 1647490 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 02:59:19.214209 1647490 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1119 02:59:19.214318 1647490 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:59:19.292011 1647490 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-19 02:59:19.27714401 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 02:59:19.292097 1647490 docker.go:319] overlay module found
	I1119 02:59:19.295296 1647490 out.go:179] * Using the docker driver based on existing profile
	I1119 02:59:19.298021 1647490 start.go:309] selected driver: docker
	I1119 02:59:19.298031 1647490 start.go:930] validating driver "docker" against &{Name:cert-expiration-422184 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-422184 Namespace:default APIServerHAVIP: A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: So
cketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:59:19.298225 1647490 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 02:59:19.299069 1647490 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:59:19.388719 1647490 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-19 02:59:19.375560475 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 02:59:19.389030 1647490 cni.go:84] Creating CNI manager for ""
	I1119 02:59:19.389081 1647490 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 02:59:19.389121 1647490 start.go:353] cluster config:
	{Name:cert-expiration-422184 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-422184 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loca
l ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I1119 02:59:19.392784 1647490 out.go:179] * Starting "cert-expiration-422184" primary control-plane node in "cert-expiration-422184" cluster
	I1119 02:59:19.395697 1647490 cache.go:134] Beginning downloading kic base image for docker with crio
	I1119 02:59:19.398691 1647490 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1119 02:59:19.401775 1647490 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 02:59:19.401809 1647490 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1119 02:59:19.401823 1647490 cache.go:65] Caching tarball of preloaded images
	I1119 02:59:19.401906 1647490 preload.go:238] Found /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1119 02:59:19.401914 1647490 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 02:59:19.402024 1647490 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/cert-expiration-422184/config.json ...
	I1119 02:59:19.402228 1647490 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1119 02:59:19.433433 1647490 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1119 02:59:19.433445 1647490 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1119 02:59:19.433456 1647490 cache.go:243] Successfully downloaded all kic artifacts
	I1119 02:59:19.433479 1647490 start.go:360] acquireMachinesLock for cert-expiration-422184: {Name:mk32dc7ac9e27f225fa9a24e6855be1b2482a03f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 02:59:19.433569 1647490 start.go:364] duration metric: took 74.05µs to acquireMachinesLock for "cert-expiration-422184"
	I1119 02:59:19.433588 1647490 start.go:96] Skipping create...Using existing machine configuration
	I1119 02:59:19.433593 1647490 fix.go:54] fixHost starting: 
	I1119 02:59:19.433852 1647490 cli_runner.go:164] Run: docker container inspect cert-expiration-422184 --format={{.State.Status}}
	I1119 02:59:19.464945 1647490 fix.go:112] recreateIfNeeded on cert-expiration-422184: state=Running err=<nil>
	W1119 02:59:19.464973 1647490 fix.go:138] unexpected machine state, will restart: <nil>
	
	
	==> CRI-O <==
	Nov 19 02:59:05 old-k8s-version-525469 crio[652]: time="2025-11-19T02:59:05.214608969Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:59:05 old-k8s-version-525469 crio[652]: time="2025-11-19T02:59:05.221720052Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:59:05 old-k8s-version-525469 crio[652]: time="2025-11-19T02:59:05.22233642Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 02:59:05 old-k8s-version-525469 crio[652]: time="2025-11-19T02:59:05.239727153Z" level=info msg="Created container 1966d902d6547084dde2f036edcea56b707e50a1b070c2ee7f35ca4118ef27be: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-t72pb/dashboard-metrics-scraper" id=9b72fdaa-ed27-4815-b107-bc6d330a16d3 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 02:59:05 old-k8s-version-525469 crio[652]: time="2025-11-19T02:59:05.240391224Z" level=info msg="Starting container: 1966d902d6547084dde2f036edcea56b707e50a1b070c2ee7f35ca4118ef27be" id=2cbbd906-ad58-46ac-9406-3c41b3a681de name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 02:59:05 old-k8s-version-525469 crio[652]: time="2025-11-19T02:59:05.242672636Z" level=info msg="Started container" PID=1643 containerID=1966d902d6547084dde2f036edcea56b707e50a1b070c2ee7f35ca4118ef27be description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-t72pb/dashboard-metrics-scraper id=2cbbd906-ad58-46ac-9406-3c41b3a681de name=/runtime.v1.RuntimeService/StartContainer sandboxID=4c124493c779612df52b11e86a3b873bd94abbd7991d805b1fe7e81bb3eb060f
	Nov 19 02:59:05 old-k8s-version-525469 conmon[1641]: conmon 1966d902d6547084dde2 <ninfo>: container 1643 exited with status 1
	Nov 19 02:59:05 old-k8s-version-525469 crio[652]: time="2025-11-19T02:59:05.702072583Z" level=info msg="Removing container: d1b7f8bfa0b8b231b04f996552d3cabf67c67ad9936519b6492f19e61a2056c5" id=836bdb92-ceb2-4192-8af9-72ef57efef36 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 19 02:59:05 old-k8s-version-525469 crio[652]: time="2025-11-19T02:59:05.709719766Z" level=info msg="Error loading conmon cgroup of container d1b7f8bfa0b8b231b04f996552d3cabf67c67ad9936519b6492f19e61a2056c5: cgroup deleted" id=836bdb92-ceb2-4192-8af9-72ef57efef36 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 19 02:59:05 old-k8s-version-525469 crio[652]: time="2025-11-19T02:59:05.713366726Z" level=info msg="Removed container d1b7f8bfa0b8b231b04f996552d3cabf67c67ad9936519b6492f19e61a2056c5: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-t72pb/dashboard-metrics-scraper" id=836bdb92-ceb2-4192-8af9-72ef57efef36 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 19 02:59:13 old-k8s-version-525469 crio[652]: time="2025-11-19T02:59:13.542356324Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 02:59:13 old-k8s-version-525469 crio[652]: time="2025-11-19T02:59:13.54645695Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 02:59:13 old-k8s-version-525469 crio[652]: time="2025-11-19T02:59:13.546490672Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 02:59:13 old-k8s-version-525469 crio[652]: time="2025-11-19T02:59:13.54651413Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 02:59:13 old-k8s-version-525469 crio[652]: time="2025-11-19T02:59:13.550284779Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 02:59:13 old-k8s-version-525469 crio[652]: time="2025-11-19T02:59:13.55031686Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 02:59:13 old-k8s-version-525469 crio[652]: time="2025-11-19T02:59:13.550341196Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 02:59:13 old-k8s-version-525469 crio[652]: time="2025-11-19T02:59:13.553929861Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 02:59:13 old-k8s-version-525469 crio[652]: time="2025-11-19T02:59:13.553962418Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 02:59:13 old-k8s-version-525469 crio[652]: time="2025-11-19T02:59:13.553993924Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 02:59:13 old-k8s-version-525469 crio[652]: time="2025-11-19T02:59:13.557313649Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 02:59:13 old-k8s-version-525469 crio[652]: time="2025-11-19T02:59:13.557341611Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 02:59:13 old-k8s-version-525469 crio[652]: time="2025-11-19T02:59:13.557364339Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 02:59:13 old-k8s-version-525469 crio[652]: time="2025-11-19T02:59:13.560723808Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 02:59:13 old-k8s-version-525469 crio[652]: time="2025-11-19T02:59:13.560754224Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	1966d902d6547       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           18 seconds ago      Exited              dashboard-metrics-scraper   2                   4c124493c7796       dashboard-metrics-scraper-5f989dc9cf-t72pb       kubernetes-dashboard
	1e39dc39380ca       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           20 seconds ago      Running             storage-provisioner         2                   6fa5811dc37c3       storage-provisioner                              kube-system
	5b0855eaad416       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   34 seconds ago      Running             kubernetes-dashboard        0                   92714275d6e52       kubernetes-dashboard-8694d4445c-vnbjk            kubernetes-dashboard
	53863eb3f6e2f       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           51 seconds ago      Running             coredns                     1                   d86b256bb0482       coredns-5dd5756b68-w8wb6                         kube-system
	bde6e20291e7c       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           51 seconds ago      Running             busybox                     1                   7baa4c9240789       busybox                                          default
	60d2a0edd3ab8       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           51 seconds ago      Running             kindnet-cni                 1                   3df30b07a141e       kindnet-rj2cj                                    kube-system
	235acbaf06e66       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           51 seconds ago      Running             kube-proxy                  1                   00cb955a6cf27       kube-proxy-jf89k                                 kube-system
	142e4b24ecf67       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           51 seconds ago      Exited              storage-provisioner         1                   6fa5811dc37c3       storage-provisioner                              kube-system
	2f6292580615c       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           57 seconds ago      Running             kube-scheduler              1                   23884360a4ee5       kube-scheduler-old-k8s-version-525469            kube-system
	8d62ae4ac8c3b       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           57 seconds ago      Running             kube-controller-manager     1                   b93a827ee700c       kube-controller-manager-old-k8s-version-525469   kube-system
	ff23878d27aac       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           57 seconds ago      Running             etcd                        1                   9bef78315d34b       etcd-old-k8s-version-525469                      kube-system
	567aa38469694       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           57 seconds ago      Running             kube-apiserver              1                   b15c8036a6f1d       kube-apiserver-old-k8s-version-525469            kube-system
	
	
	==> coredns [53863eb3f6e2f364671dc355ca4cbd8a26a9924dd9beec0d9814d3d6ca0e74fa] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:53659 - 44166 "HINFO IN 5991072870533325853.6843760316668869777. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.00484458s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-525469
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-525469
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ebd49efe8b7d44f89fc96e21beffcd64dfee8277
	                    minikube.k8s.io/name=old-k8s-version-525469
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T02_57_25_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 02:57:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-525469
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 02:59:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 02:59:02 +0000   Wed, 19 Nov 2025 02:57:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 02:59:02 +0000   Wed, 19 Nov 2025 02:57:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 02:59:02 +0000   Wed, 19 Nov 2025 02:57:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 02:59:02 +0000   Wed, 19 Nov 2025 02:57:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-525469
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                3d1f45c6-9f00-4378-a685-d971289e6f86
	  Boot ID:                    b92b1939-fcd0-45dc-ac89-2d161566a71c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 coredns-5dd5756b68-w8wb6                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     106s
	  kube-system                 etcd-old-k8s-version-525469                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         119s
	  kube-system                 kindnet-rj2cj                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      107s
	  kube-system                 kube-apiserver-old-k8s-version-525469             250m (12%)    0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-controller-manager-old-k8s-version-525469    200m (10%)    0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-proxy-jf89k                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-scheduler-old-k8s-version-525469             100m (5%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-t72pb        0 (0%)        0 (0%)      0 (0%)           0 (0%)         40s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-vnbjk             0 (0%)        0 (0%)      0 (0%)           0 (0%)         40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 105s                 kube-proxy       
	  Normal  Starting                 50s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m8s (x8 over 2m8s)  kubelet          Node old-k8s-version-525469 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m8s (x8 over 2m8s)  kubelet          Node old-k8s-version-525469 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m8s (x8 over 2m8s)  kubelet          Node old-k8s-version-525469 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     2m                   kubelet          Node old-k8s-version-525469 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  2m                   kubelet          Node old-k8s-version-525469 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m                   kubelet          Node old-k8s-version-525469 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m                   kubelet          Starting kubelet.
	  Normal  RegisteredNode           107s                 node-controller  Node old-k8s-version-525469 event: Registered Node old-k8s-version-525469 in Controller
	  Normal  NodeReady                93s                  kubelet          Node old-k8s-version-525469 status is now: NodeReady
	  Normal  Starting                 58s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  58s (x8 over 58s)    kubelet          Node old-k8s-version-525469 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    58s (x8 over 58s)    kubelet          Node old-k8s-version-525469 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     58s (x8 over 58s)    kubelet          Node old-k8s-version-525469 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           40s                  node-controller  Node old-k8s-version-525469 event: Registered Node old-k8s-version-525469 in Controller
	
	
	==> dmesg <==
	[Nov19 02:30] overlayfs: idmapped layers are currently not supported
	[Nov19 02:35] overlayfs: idmapped layers are currently not supported
	[ +37.747558] overlayfs: idmapped layers are currently not supported
	[Nov19 02:37] overlayfs: idmapped layers are currently not supported
	[Nov19 02:38] overlayfs: idmapped layers are currently not supported
	[Nov19 02:39] overlayfs: idmapped layers are currently not supported
	[Nov19 02:41] overlayfs: idmapped layers are currently not supported
	[ +25.528121] overlayfs: idmapped layers are currently not supported
	[ +11.329962] overlayfs: idmapped layers are currently not supported
	[Nov19 02:42] overlayfs: idmapped layers are currently not supported
	[ +16.386117] overlayfs: idmapped layers are currently not supported
	[Nov19 02:43] overlayfs: idmapped layers are currently not supported
	[ +23.762081] overlayfs: idmapped layers are currently not supported
	[Nov19 02:45] overlayfs: idmapped layers are currently not supported
	[Nov19 02:46] overlayfs: idmapped layers are currently not supported
	[Nov19 02:48] overlayfs: idmapped layers are currently not supported
	[Nov19 02:50] overlayfs: idmapped layers are currently not supported
	[ +30.622614] overlayfs: idmapped layers are currently not supported
	[Nov19 02:53] overlayfs: idmapped layers are currently not supported
	[Nov19 02:55] overlayfs: idmapped layers are currently not supported
	[ +48.629499] overlayfs: idmapped layers are currently not supported
	[Nov19 02:56] overlayfs: idmapped layers are currently not supported
	[ +31.470515] overlayfs: idmapped layers are currently not supported
	[Nov19 02:57] overlayfs: idmapped layers are currently not supported
	[Nov19 02:58] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [ff23878d27aac15b322cc8fb8b4fb5e92dfdaae8febb40b85cef9bd65331149b] <==
	{"level":"info","ts":"2025-11-19T02:58:27.197936Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-19T02:58:27.197947Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-19T02:58:27.198231Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-11-19T02:58:27.198983Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-11-19T02:58:27.199548Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-19T02:58:27.199642Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-19T02:58:27.214244Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-19T02:58:27.2164Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-19T02:58:27.216506Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-19T02:58:27.214454Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-19T02:58:27.216615Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-19T02:58:28.485546Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-19T02:58:28.485594Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-19T02:58:28.485624Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-19T02:58:28.485638Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-11-19T02:58:28.485644Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-11-19T02:58:28.485655Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-11-19T02:58:28.485663Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-11-19T02:58:28.489401Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-525469 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-19T02:58:28.489469Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-19T02:58:28.490721Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-11-19T02:58:28.492103Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-19T02:58:28.496165Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-19T02:58:28.492134Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-19T02:58:28.505602Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 02:59:24 up 10:41,  0 user,  load average: 1.35, 2.61, 2.37
	Linux old-k8s-version-525469 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [60d2a0edd3ab8416b4bc4c9842ab46127629fde5aef3c4a1faea18d3bd15fde4] <==
	I1119 02:58:33.350247       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 02:58:33.350441       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1119 02:58:33.350568       1 main.go:148] setting mtu 1500 for CNI 
	I1119 02:58:33.350580       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 02:58:33.350589       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T02:58:33Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 02:58:33.536738       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 02:58:33.536756       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 02:58:33.536765       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 02:58:33.537047       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1119 02:59:03.537271       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1119 02:59:03.537419       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1119 02:59:03.537448       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1119 02:59:03.537454       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1119 02:59:05.037848       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 02:59:05.037888       1 metrics.go:72] Registering metrics
	I1119 02:59:05.037948       1 controller.go:711] "Syncing nftables rules"
	I1119 02:59:13.542026       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1119 02:59:13.542071       1 main.go:301] handling current node
	I1119 02:59:23.541815       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1119 02:59:23.541846       1 main.go:301] handling current node
	
	
	==> kube-apiserver [567aa3846969404af990c3a71e9425667593df67d00ae761c20f8045d800b846] <==
	I1119 02:58:31.938971       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 02:58:31.939940       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1119 02:58:31.965641       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1119 02:58:31.972995       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1119 02:58:31.973369       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1119 02:58:31.975666       1 shared_informer.go:318] Caches are synced for configmaps
	I1119 02:58:31.975853       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1119 02:58:31.975868       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1119 02:58:31.976205       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1119 02:58:31.983622       1 aggregator.go:166] initial CRD sync complete...
	I1119 02:58:31.983648       1 autoregister_controller.go:141] Starting autoregister controller
	I1119 02:58:31.983656       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1119 02:58:31.983663       1 cache.go:39] Caches are synced for autoregister controller
	E1119 02:58:32.018836       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1119 02:58:32.589678       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 02:58:33.989747       1 controller.go:624] quota admission added evaluator for: namespaces
	I1119 02:58:34.040635       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1119 02:58:34.067089       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 02:58:34.079613       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 02:58:34.090424       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1119 02:58:34.139747       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.183.3"}
	I1119 02:58:34.167361       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.103.23"}
	I1119 02:58:44.720108       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1119 02:58:44.819927       1 controller.go:624] quota admission added evaluator for: endpoints
	I1119 02:58:44.945790       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [8d62ae4ac8c3bce9aa36e4a78452b16e478ca3633adaf4a4fd9ade5e868e3c78] <==
	I1119 02:58:44.883975       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="87.997µs"
	I1119 02:58:44.886003       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-vnbjk"
	I1119 02:58:44.886028       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-t72pb"
	I1119 02:58:44.902479       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="174.560946ms"
	I1119 02:58:44.913175       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="190.124958ms"
	I1119 02:58:44.916988       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="14.367631ms"
	I1119 02:58:44.927144       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="3.989383ms"
	I1119 02:58:44.934174       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="20.933991ms"
	I1119 02:58:44.934386       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="102.823µs"
	I1119 02:58:44.940566       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="61.463µs"
	I1119 02:58:44.956187       1 endpointslice_controller.go:310] "Error syncing endpoint slices for service, retrying" key="kubernetes-dashboard/dashboard-metrics-scraper" err="EndpointSlice informer cache is out of date"
	I1119 02:58:44.959731       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="97.942µs"
	I1119 02:58:44.962510       1 endpointslice_controller.go:310] "Error syncing endpoint slices for service, retrying" key="kubernetes-dashboard/kubernetes-dashboard" err="EndpointSlice informer cache is out of date"
	I1119 02:58:44.978859       1 shared_informer.go:318] Caches are synced for garbage collector
	I1119 02:58:44.978900       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1119 02:58:44.983680       1 shared_informer.go:318] Caches are synced for garbage collector
	I1119 02:58:50.690767       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="18.026547ms"
	I1119 02:58:50.690885       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="63.58µs"
	I1119 02:58:54.679759       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="60.544µs"
	I1119 02:58:55.696442       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="94.767µs"
	I1119 02:58:56.690017       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="57.024µs"
	I1119 02:59:05.474809       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="12.532311ms"
	I1119 02:59:05.475814       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="56.573µs"
	I1119 02:59:05.722623       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="61.25µs"
	I1119 02:59:15.226717       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="55.481µs"
	
	
	==> kube-proxy [235acbaf06e664b5de8391a1ab5780f85ac9ba0416c65d86f8dccbfaa51068d1] <==
	I1119 02:58:33.470515       1 server_others.go:69] "Using iptables proxy"
	I1119 02:58:33.502955       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1119 02:58:33.549966       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 02:58:33.558600       1 server_others.go:152] "Using iptables Proxier"
	I1119 02:58:33.558637       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1119 02:58:33.558644       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1119 02:58:33.558690       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1119 02:58:33.558897       1 server.go:846] "Version info" version="v1.28.0"
	I1119 02:58:33.558907       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 02:58:33.561133       1 config.go:188] "Starting service config controller"
	I1119 02:58:33.561160       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1119 02:58:33.561182       1 config.go:97] "Starting endpoint slice config controller"
	I1119 02:58:33.561185       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1119 02:58:33.561985       1 config.go:315] "Starting node config controller"
	I1119 02:58:33.561993       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1119 02:58:33.662572       1 shared_informer.go:318] Caches are synced for service config
	I1119 02:58:33.662626       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1119 02:58:33.663838       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [2f6292580615ce6fb56d365b1c0f4e962165fafffb0c461422541aa1841cfc86] <==
	I1119 02:58:29.800816       1 serving.go:348] Generated self-signed cert in-memory
	I1119 02:58:33.069690       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1119 02:58:33.069718       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 02:58:33.109180       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1119 02:58:33.109285       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1119 02:58:33.109299       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1119 02:58:33.109318       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1119 02:58:33.111228       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 02:58:33.111240       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1119 02:58:33.111255       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1119 02:58:33.111260       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1119 02:58:33.209662       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1119 02:58:33.215264       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1119 02:58:33.215312       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 19 02:58:44 old-k8s-version-525469 kubelet[784]: I1119 02:58:44.891961     784 topology_manager.go:215] "Topology Admit Handler" podUID="e30d552f-4050-41bf-b875-0c95fae03973" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-vnbjk"
	Nov 19 02:58:44 old-k8s-version-525469 kubelet[784]: I1119 02:58:44.907538     784 topology_manager.go:215] "Topology Admit Handler" podUID="3ecb8f41-c60c-4ec7-822b-61adb6b19af0" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-t72pb"
	Nov 19 02:58:44 old-k8s-version-525469 kubelet[784]: I1119 02:58:44.981714     784 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/3ecb8f41-c60c-4ec7-822b-61adb6b19af0-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-t72pb\" (UID: \"3ecb8f41-c60c-4ec7-822b-61adb6b19af0\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-t72pb"
	Nov 19 02:58:44 old-k8s-version-525469 kubelet[784]: I1119 02:58:44.981771     784 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/e30d552f-4050-41bf-b875-0c95fae03973-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-vnbjk\" (UID: \"e30d552f-4050-41bf-b875-0c95fae03973\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-vnbjk"
	Nov 19 02:58:44 old-k8s-version-525469 kubelet[784]: I1119 02:58:44.981801     784 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2c2l7\" (UniqueName: \"kubernetes.io/projected/e30d552f-4050-41bf-b875-0c95fae03973-kube-api-access-2c2l7\") pod \"kubernetes-dashboard-8694d4445c-vnbjk\" (UID: \"e30d552f-4050-41bf-b875-0c95fae03973\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-vnbjk"
	Nov 19 02:58:44 old-k8s-version-525469 kubelet[784]: I1119 02:58:44.981826     784 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jk8mz\" (UniqueName: \"kubernetes.io/projected/3ecb8f41-c60c-4ec7-822b-61adb6b19af0-kube-api-access-jk8mz\") pod \"dashboard-metrics-scraper-5f989dc9cf-t72pb\" (UID: \"3ecb8f41-c60c-4ec7-822b-61adb6b19af0\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-t72pb"
	Nov 19 02:58:45 old-k8s-version-525469 kubelet[784]: W1119 02:58:45.275334     784 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/8d5d18297d31faedbee0dfbb322407f6bfb9f69bcf20408c0cd6dd14bd8ebca9/crio-92714275d6e52fa2b97859bc480838816c262719267ae38405e1340b4528cafb WatchSource:0}: Error finding container 92714275d6e52fa2b97859bc480838816c262719267ae38405e1340b4528cafb: Status 404 returned error can't find the container with id 92714275d6e52fa2b97859bc480838816c262719267ae38405e1340b4528cafb
	Nov 19 02:58:54 old-k8s-version-525469 kubelet[784]: I1119 02:58:54.666393     784 scope.go:117] "RemoveContainer" containerID="2e8e9c5369a7eba3d1664d074e0e2c9344a7617843aec5f76fb0fbccef03ce3b"
	Nov 19 02:58:54 old-k8s-version-525469 kubelet[784]: I1119 02:58:54.683326     784 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-vnbjk" podStartSLOduration=5.800547176 podCreationTimestamp="2025-11-19 02:58:44 +0000 UTC" firstStartedPulling="2025-11-19 02:58:45.278634319 +0000 UTC m=+18.943556147" lastFinishedPulling="2025-11-19 02:58:50.160615033 +0000 UTC m=+23.825536861" observedRunningTime="2025-11-19 02:58:50.679150093 +0000 UTC m=+24.344071913" watchObservedRunningTime="2025-11-19 02:58:54.68252789 +0000 UTC m=+28.347449710"
	Nov 19 02:58:55 old-k8s-version-525469 kubelet[784]: I1119 02:58:55.670045     784 scope.go:117] "RemoveContainer" containerID="d1b7f8bfa0b8b231b04f996552d3cabf67c67ad9936519b6492f19e61a2056c5"
	Nov 19 02:58:55 old-k8s-version-525469 kubelet[784]: I1119 02:58:55.671006     784 scope.go:117] "RemoveContainer" containerID="2e8e9c5369a7eba3d1664d074e0e2c9344a7617843aec5f76fb0fbccef03ce3b"
	Nov 19 02:58:55 old-k8s-version-525469 kubelet[784]: E1119 02:58:55.671649     784 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-t72pb_kubernetes-dashboard(3ecb8f41-c60c-4ec7-822b-61adb6b19af0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-t72pb" podUID="3ecb8f41-c60c-4ec7-822b-61adb6b19af0"
	Nov 19 02:58:56 old-k8s-version-525469 kubelet[784]: I1119 02:58:56.674512     784 scope.go:117] "RemoveContainer" containerID="d1b7f8bfa0b8b231b04f996552d3cabf67c67ad9936519b6492f19e61a2056c5"
	Nov 19 02:58:56 old-k8s-version-525469 kubelet[784]: E1119 02:58:56.674797     784 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-t72pb_kubernetes-dashboard(3ecb8f41-c60c-4ec7-822b-61adb6b19af0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-t72pb" podUID="3ecb8f41-c60c-4ec7-822b-61adb6b19af0"
	Nov 19 02:59:03 old-k8s-version-525469 kubelet[784]: I1119 02:59:03.690884     784 scope.go:117] "RemoveContainer" containerID="142e4b24ecf679fdf5439063370dc0b248972ad5b7156c4b11d759ca3eb1a5fb"
	Nov 19 02:59:05 old-k8s-version-525469 kubelet[784]: I1119 02:59:05.211062     784 scope.go:117] "RemoveContainer" containerID="d1b7f8bfa0b8b231b04f996552d3cabf67c67ad9936519b6492f19e61a2056c5"
	Nov 19 02:59:05 old-k8s-version-525469 kubelet[784]: I1119 02:59:05.700428     784 scope.go:117] "RemoveContainer" containerID="d1b7f8bfa0b8b231b04f996552d3cabf67c67ad9936519b6492f19e61a2056c5"
	Nov 19 02:59:05 old-k8s-version-525469 kubelet[784]: I1119 02:59:05.700803     784 scope.go:117] "RemoveContainer" containerID="1966d902d6547084dde2f036edcea56b707e50a1b070c2ee7f35ca4118ef27be"
	Nov 19 02:59:05 old-k8s-version-525469 kubelet[784]: E1119 02:59:05.701165     784 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-t72pb_kubernetes-dashboard(3ecb8f41-c60c-4ec7-822b-61adb6b19af0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-t72pb" podUID="3ecb8f41-c60c-4ec7-822b-61adb6b19af0"
	Nov 19 02:59:15 old-k8s-version-525469 kubelet[784]: I1119 02:59:15.211522     784 scope.go:117] "RemoveContainer" containerID="1966d902d6547084dde2f036edcea56b707e50a1b070c2ee7f35ca4118ef27be"
	Nov 19 02:59:15 old-k8s-version-525469 kubelet[784]: E1119 02:59:15.211852     784 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-t72pb_kubernetes-dashboard(3ecb8f41-c60c-4ec7-822b-61adb6b19af0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-t72pb" podUID="3ecb8f41-c60c-4ec7-822b-61adb6b19af0"
	Nov 19 02:59:19 old-k8s-version-525469 kubelet[784]: I1119 02:59:19.571734     784 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 19 02:59:19 old-k8s-version-525469 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 19 02:59:19 old-k8s-version-525469 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 19 02:59:19 old-k8s-version-525469 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [5b0855eaad416574d49cfa5dbd17994e2931ec934fc6e5e17bcf06b94186dabd] <==
	2025/11/19 02:58:50 Using namespace: kubernetes-dashboard
	2025/11/19 02:58:50 Using in-cluster config to connect to apiserver
	2025/11/19 02:58:50 Using secret token for csrf signing
	2025/11/19 02:58:50 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/19 02:58:50 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/19 02:58:50 Successful initial request to the apiserver, version: v1.28.0
	2025/11/19 02:58:50 Generating JWE encryption key
	2025/11/19 02:58:50 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/19 02:58:50 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/19 02:58:51 Initializing JWE encryption key from synchronized object
	2025/11/19 02:58:51 Creating in-cluster Sidecar client
	2025/11/19 02:58:51 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/19 02:58:51 Serving insecurely on HTTP port: 9090
	2025/11/19 02:59:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/19 02:58:50 Starting overwatch
	
	
	==> storage-provisioner [142e4b24ecf679fdf5439063370dc0b248972ad5b7156c4b11d759ca3eb1a5fb] <==
	I1119 02:58:33.268174       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1119 02:59:03.270478       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [1e39dc39380cacbe09c4d92d95094596956ef4bdfab3c911455508c2ea032684] <==
	I1119 02:59:03.742899       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1119 02:59:03.755529       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1119 02:59:03.755640       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1119 02:59:21.157568       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1119 02:59:21.157747       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-525469_512f5c86-e1ff-42a1-bc21-e0c3d3778a0b!
	I1119 02:59:21.158374       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e9c87b51-c0db-4a20-998e-baae02e74881", APIVersion:"v1", ResourceVersion:"668", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-525469_512f5c86-e1ff-42a1-bc21-e0c3d3778a0b became leader
	I1119 02:59:21.258266       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-525469_512f5c86-e1ff-42a1-bc21-e0c3d3778a0b!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-525469 -n old-k8s-version-525469
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-525469 -n old-k8s-version-525469: exit status 2 (341.30513ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-525469 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (6.28s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.61s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-579203 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-579203 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (279.565913ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T03:01:10Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-579203 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-579203 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-579203 describe deploy/metrics-server -n kube-system: exit status 1 (97.442809ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-579203 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-579203
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-579203:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d6ecbc325578cb5699745b1df1421f02473ccdb3a25858c2b3fc6ef55c70faf5",
	        "Created": "2025-11-19T02:59:35.831812475Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1650246,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T02:59:35.900651921Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:6dfeb5329cc83d126555f88f43b09eef7a09c7f546c9166b94d33747df91b6df",
	        "ResolvConfPath": "/var/lib/docker/containers/d6ecbc325578cb5699745b1df1421f02473ccdb3a25858c2b3fc6ef55c70faf5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d6ecbc325578cb5699745b1df1421f02473ccdb3a25858c2b3fc6ef55c70faf5/hostname",
	        "HostsPath": "/var/lib/docker/containers/d6ecbc325578cb5699745b1df1421f02473ccdb3a25858c2b3fc6ef55c70faf5/hosts",
	        "LogPath": "/var/lib/docker/containers/d6ecbc325578cb5699745b1df1421f02473ccdb3a25858c2b3fc6ef55c70faf5/d6ecbc325578cb5699745b1df1421f02473ccdb3a25858c2b3fc6ef55c70faf5-json.log",
	        "Name": "/default-k8s-diff-port-579203",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-579203:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-579203",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d6ecbc325578cb5699745b1df1421f02473ccdb3a25858c2b3fc6ef55c70faf5",
	                "LowerDir": "/var/lib/docker/overlay2/d622a4d4992266276def27975e825f419a488b9d81d50dcaf7f9bc257af61d59-init/diff:/var/lib/docker/overlay2/c48d08e2bd245db4e1c5c6447aff9f72126e9377265a1f1172daf5070a059e2a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d622a4d4992266276def27975e825f419a488b9d81d50dcaf7f9bc257af61d59/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d622a4d4992266276def27975e825f419a488b9d81d50dcaf7f9bc257af61d59/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d622a4d4992266276def27975e825f419a488b9d81d50dcaf7f9bc257af61d59/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-579203",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-579203/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-579203",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-579203",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-579203",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1d35537a84a4833a083e1e7f42083520aae7840c5d883c0a4da84757728d8287",
	            "SandboxKey": "/var/run/docker/netns/1d35537a84a4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34905"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34906"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34909"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34907"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34908"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-579203": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e6:e4:de:c3:28:ac",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8f7be654242a82c1a39285c06387290e9e449b11aff81f581eff53904d206cfb",
	                    "EndpointID": "e313c088a99671304e3104e33596043ffa2951bed66504d7a1959c8f9d7a515a",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-579203",
	                        "d6ecbc325578"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-579203 -n default-k8s-diff-port-579203
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-579203 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-579203 logs -n 25: (1.245220349s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-889743 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-889743                │ jenkins │ v1.37.0 │ 19 Nov 25 02:55 UTC │                     │
	│ ssh     │ -p cilium-889743 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-889743                │ jenkins │ v1.37.0 │ 19 Nov 25 02:55 UTC │                     │
	│ ssh     │ -p cilium-889743 sudo crio config                                                                                                                                                                                                             │ cilium-889743                │ jenkins │ v1.37.0 │ 19 Nov 25 02:55 UTC │                     │
	│ delete  │ -p cilium-889743                                                                                                                                                                                                                              │ cilium-889743                │ jenkins │ v1.37.0 │ 19 Nov 25 02:55 UTC │ 19 Nov 25 02:55 UTC │
	│ start   │ -p force-systemd-env-335811 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-335811     │ jenkins │ v1.37.0 │ 19 Nov 25 02:55 UTC │ 19 Nov 25 02:56 UTC │
	│ start   │ -p cert-expiration-422184 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-422184       │ jenkins │ v1.37.0 │ 19 Nov 25 02:55 UTC │ 19 Nov 25 02:56 UTC │
	│ delete  │ -p force-systemd-env-335811                                                                                                                                                                                                                   │ force-systemd-env-335811     │ jenkins │ v1.37.0 │ 19 Nov 25 02:56 UTC │ 19 Nov 25 02:56 UTC │
	│ start   │ -p cert-options-702842 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-702842          │ jenkins │ v1.37.0 │ 19 Nov 25 02:56 UTC │ 19 Nov 25 02:56 UTC │
	│ ssh     │ cert-options-702842 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-702842          │ jenkins │ v1.37.0 │ 19 Nov 25 02:56 UTC │ 19 Nov 25 02:56 UTC │
	│ ssh     │ -p cert-options-702842 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-702842          │ jenkins │ v1.37.0 │ 19 Nov 25 02:56 UTC │ 19 Nov 25 02:56 UTC │
	│ delete  │ -p cert-options-702842                                                                                                                                                                                                                        │ cert-options-702842          │ jenkins │ v1.37.0 │ 19 Nov 25 02:56 UTC │ 19 Nov 25 02:56 UTC │
	│ start   │ -p old-k8s-version-525469 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-525469       │ jenkins │ v1.37.0 │ 19 Nov 25 02:56 UTC │ 19 Nov 25 02:57 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-525469 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-525469       │ jenkins │ v1.37.0 │ 19 Nov 25 02:58 UTC │                     │
	│ stop    │ -p old-k8s-version-525469 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-525469       │ jenkins │ v1.37.0 │ 19 Nov 25 02:58 UTC │ 19 Nov 25 02:58 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-525469 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-525469       │ jenkins │ v1.37.0 │ 19 Nov 25 02:58 UTC │ 19 Nov 25 02:58 UTC │
	│ start   │ -p old-k8s-version-525469 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-525469       │ jenkins │ v1.37.0 │ 19 Nov 25 02:58 UTC │ 19 Nov 25 02:59 UTC │
	│ image   │ old-k8s-version-525469 image list --format=json                                                                                                                                                                                               │ old-k8s-version-525469       │ jenkins │ v1.37.0 │ 19 Nov 25 02:59 UTC │ 19 Nov 25 02:59 UTC │
	│ pause   │ -p old-k8s-version-525469 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-525469       │ jenkins │ v1.37.0 │ 19 Nov 25 02:59 UTC │                     │
	│ start   │ -p cert-expiration-422184 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-422184       │ jenkins │ v1.37.0 │ 19 Nov 25 02:59 UTC │ 19 Nov 25 02:59 UTC │
	│ delete  │ -p old-k8s-version-525469                                                                                                                                                                                                                     │ old-k8s-version-525469       │ jenkins │ v1.37.0 │ 19 Nov 25 02:59 UTC │ 19 Nov 25 02:59 UTC │
	│ delete  │ -p old-k8s-version-525469                                                                                                                                                                                                                     │ old-k8s-version-525469       │ jenkins │ v1.37.0 │ 19 Nov 25 02:59 UTC │ 19 Nov 25 02:59 UTC │
	│ start   │ -p default-k8s-diff-port-579203 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-579203 │ jenkins │ v1.37.0 │ 19 Nov 25 02:59 UTC │ 19 Nov 25 03:01 UTC │
	│ delete  │ -p cert-expiration-422184                                                                                                                                                                                                                     │ cert-expiration-422184       │ jenkins │ v1.37.0 │ 19 Nov 25 02:59 UTC │ 19 Nov 25 02:59 UTC │
	│ start   │ -p embed-certs-592123 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-592123           │ jenkins │ v1.37.0 │ 19 Nov 25 02:59 UTC │ 19 Nov 25 03:01 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-579203 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-579203 │ jenkins │ v1.37.0 │ 19 Nov 25 03:01 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 02:59:41
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 02:59:41.740900 1651562 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:59:41.741124 1651562 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:59:41.741162 1651562 out.go:374] Setting ErrFile to fd 2...
	I1119 02:59:41.741181 1651562 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:59:41.741550 1651562 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-1463525/.minikube/bin
	I1119 02:59:41.742061 1651562 out.go:368] Setting JSON to false
	I1119 02:59:41.743122 1651562 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":38509,"bootTime":1763482673,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1119 02:59:41.743222 1651562 start.go:143] virtualization:  
	I1119 02:59:41.748050 1651562 out.go:179] * [embed-certs-592123] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1119 02:59:41.751482 1651562 out.go:179]   - MINIKUBE_LOCATION=21924
	I1119 02:59:41.751651 1651562 notify.go:221] Checking for updates...
	I1119 02:59:41.757791 1651562 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 02:59:41.760880 1651562 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21924-1463525/kubeconfig
	I1119 02:59:41.764275 1651562 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-1463525/.minikube
	I1119 02:59:41.767347 1651562 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1119 02:59:41.770428 1651562 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 02:59:41.773995 1651562 config.go:182] Loaded profile config "default-k8s-diff-port-579203": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:59:41.774108 1651562 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 02:59:41.814552 1651562 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1119 02:59:41.814696 1651562 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:59:41.876041 1651562 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-19 02:59:41.866352719 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 02:59:41.876155 1651562 docker.go:319] overlay module found
	I1119 02:59:41.879324 1651562 out.go:179] * Using the docker driver based on user configuration
	I1119 02:59:41.882243 1651562 start.go:309] selected driver: docker
	I1119 02:59:41.882261 1651562 start.go:930] validating driver "docker" against <nil>
	I1119 02:59:41.882274 1651562 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 02:59:41.882990 1651562 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:59:41.974310 1651562 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:54 SystemTime:2025-11-19 02:59:41.9588122 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 02:59:41.974455 1651562 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1119 02:59:41.974695 1651562 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 02:59:41.977603 1651562 out.go:179] * Using Docker driver with root privileges
	I1119 02:59:41.981341 1651562 cni.go:84] Creating CNI manager for ""
	I1119 02:59:41.981418 1651562 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 02:59:41.981434 1651562 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1119 02:59:41.981554 1651562 start.go:353] cluster config:
	{Name:embed-certs-592123 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-592123 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPI
D:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:59:41.987135 1651562 out.go:179] * Starting "embed-certs-592123" primary control-plane node in "embed-certs-592123" cluster
	I1119 02:59:41.989994 1651562 cache.go:134] Beginning downloading kic base image for docker with crio
	I1119 02:59:41.992938 1651562 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1119 02:59:41.998463 1651562 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 02:59:41.998515 1651562 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1119 02:59:41.998526 1651562 cache.go:65] Caching tarball of preloaded images
	I1119 02:59:41.998636 1651562 preload.go:238] Found /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1119 02:59:41.998652 1651562 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 02:59:41.998767 1651562 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/embed-certs-592123/config.json ...
	I1119 02:59:41.998789 1651562 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/embed-certs-592123/config.json: {Name:mk4ec892ed5c5973512217c122e473e16e420a46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:59:41.998948 1651562 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1119 02:59:42.035122 1651562 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1119 02:59:42.035147 1651562 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1119 02:59:42.035161 1651562 cache.go:243] Successfully downloaded all kic artifacts
	I1119 02:59:42.035200 1651562 start.go:360] acquireMachinesLock for embed-certs-592123: {Name:mkad274f419d3f3256db7dae28b742586dc2ebd2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 02:59:42.035313 1651562 start.go:364] duration metric: took 94.897µs to acquireMachinesLock for "embed-certs-592123"
	I1119 02:59:42.035340 1651562 start.go:93] Provisioning new machine with config: &{Name:embed-certs-592123 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-592123 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 02:59:42.035408 1651562 start.go:125] createHost starting for "" (driver="docker")
	I1119 02:59:40.117240 1649559 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-579203
	
	I1119 02:59:40.117259 1649559 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-579203"
	I1119 02:59:40.117323 1649559 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-579203
	I1119 02:59:40.136516 1649559 main.go:143] libmachine: Using SSH client type: native
	I1119 02:59:40.136865 1649559 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34905 <nil> <nil>}
	I1119 02:59:40.136885 1649559 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-579203 && echo "default-k8s-diff-port-579203" | sudo tee /etc/hostname
	I1119 02:59:40.287384 1649559 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-579203
	
	I1119 02:59:40.287458 1649559 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-579203
	I1119 02:59:40.308771 1649559 main.go:143] libmachine: Using SSH client type: native
	I1119 02:59:40.309081 1649559 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34905 <nil> <nil>}
	I1119 02:59:40.309099 1649559 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-579203' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-579203/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-579203' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 02:59:40.450022 1649559 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 02:59:40.450047 1649559 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21924-1463525/.minikube CaCertPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21924-1463525/.minikube}
	I1119 02:59:40.450069 1649559 ubuntu.go:190] setting up certificates
	I1119 02:59:40.450077 1649559 provision.go:84] configureAuth start
	I1119 02:59:40.450141 1649559 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-579203
	I1119 02:59:40.466967 1649559 provision.go:143] copyHostCerts
	I1119 02:59:40.467036 1649559 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-1463525/.minikube/cert.pem, removing ...
	I1119 02:59:40.467049 1649559 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-1463525/.minikube/cert.pem
	I1119 02:59:40.467126 1649559 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21924-1463525/.minikube/cert.pem (1123 bytes)
	I1119 02:59:40.467221 1649559 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-1463525/.minikube/key.pem, removing ...
	I1119 02:59:40.467231 1649559 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-1463525/.minikube/key.pem
	I1119 02:59:40.467258 1649559 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21924-1463525/.minikube/key.pem (1675 bytes)
	I1119 02:59:40.467319 1649559 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.pem, removing ...
	I1119 02:59:40.467328 1649559 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.pem
	I1119 02:59:40.467352 1649559 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.pem (1078 bytes)
	I1119 02:59:40.467406 1649559 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-579203 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-579203 localhost minikube]
	I1119 02:59:40.925159 1649559 provision.go:177] copyRemoteCerts
	I1119 02:59:40.925278 1649559 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 02:59:40.925354 1649559 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-579203
	I1119 02:59:40.948168 1649559 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34905 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/default-k8s-diff-port-579203/id_rsa Username:docker}
	I1119 02:59:41.086276 1649559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1119 02:59:41.130722 1649559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1119 02:59:41.165790 1649559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1119 02:59:41.187940 1649559 provision.go:87] duration metric: took 737.837732ms to configureAuth
	I1119 02:59:41.187974 1649559 ubuntu.go:206] setting minikube options for container-runtime
	I1119 02:59:41.188143 1649559 config.go:182] Loaded profile config "default-k8s-diff-port-579203": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:59:41.188260 1649559 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-579203
	I1119 02:59:41.207545 1649559 main.go:143] libmachine: Using SSH client type: native
	I1119 02:59:41.207858 1649559 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34905 <nil> <nil>}
	I1119 02:59:41.207879 1649559 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 02:59:41.576495 1649559 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 02:59:41.576521 1649559 machine.go:97] duration metric: took 4.64936347s to provisionDockerMachine
	I1119 02:59:41.576532 1649559 client.go:176] duration metric: took 12.188062311s to LocalClient.Create
	I1119 02:59:41.576546 1649559 start.go:167] duration metric: took 12.188178067s to libmachine.API.Create "default-k8s-diff-port-579203"
	I1119 02:59:41.576554 1649559 start.go:293] postStartSetup for "default-k8s-diff-port-579203" (driver="docker")
	I1119 02:59:41.576565 1649559 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 02:59:41.576632 1649559 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 02:59:41.576683 1649559 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-579203
	I1119 02:59:41.601565 1649559 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34905 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/default-k8s-diff-port-579203/id_rsa Username:docker}
	I1119 02:59:41.703121 1649559 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 02:59:41.707066 1649559 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 02:59:41.707092 1649559 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 02:59:41.707104 1649559 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-1463525/.minikube/addons for local assets ...
	I1119 02:59:41.707160 1649559 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-1463525/.minikube/files for local assets ...
	I1119 02:59:41.707242 1649559 filesync.go:149] local asset: /home/jenkins/minikube-integration/21924-1463525/.minikube/files/etc/ssl/certs/14653772.pem -> 14653772.pem in /etc/ssl/certs
	I1119 02:59:41.707349 1649559 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 02:59:41.715621 1649559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/files/etc/ssl/certs/14653772.pem --> /etc/ssl/certs/14653772.pem (1708 bytes)
	I1119 02:59:41.734958 1649559 start.go:296] duration metric: took 158.389168ms for postStartSetup
	I1119 02:59:41.735324 1649559 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-579203
	I1119 02:59:41.759529 1649559 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/default-k8s-diff-port-579203/config.json ...
	I1119 02:59:41.759804 1649559 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 02:59:41.759853 1649559 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-579203
	I1119 02:59:41.784522 1649559 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34905 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/default-k8s-diff-port-579203/id_rsa Username:docker}
	I1119 02:59:41.889723 1649559 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 02:59:41.895092 1649559 start.go:128] duration metric: took 12.510269309s to createHost
	I1119 02:59:41.895114 1649559 start.go:83] releasing machines lock for "default-k8s-diff-port-579203", held for 12.510385408s
	I1119 02:59:41.895193 1649559 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-579203
	I1119 02:59:41.929812 1649559 ssh_runner.go:195] Run: cat /version.json
	I1119 02:59:41.929863 1649559 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-579203
	I1119 02:59:41.930105 1649559 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 02:59:41.930163 1649559 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-579203
	I1119 02:59:41.961605 1649559 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34905 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/default-k8s-diff-port-579203/id_rsa Username:docker}
	I1119 02:59:41.980591 1649559 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34905 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/default-k8s-diff-port-579203/id_rsa Username:docker}
	I1119 02:59:42.218148 1649559 ssh_runner.go:195] Run: systemctl --version
	I1119 02:59:42.226886 1649559 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 02:59:42.290739 1649559 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 02:59:42.302418 1649559 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 02:59:42.302502 1649559 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 02:59:42.353376 1649559 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1119 02:59:42.353417 1649559 start.go:496] detecting cgroup driver to use...
	I1119 02:59:42.353452 1649559 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1119 02:59:42.353536 1649559 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 02:59:42.389137 1649559 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 02:59:42.411043 1649559 docker.go:218] disabling cri-docker service (if available) ...
	I1119 02:59:42.411110 1649559 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 02:59:42.431153 1649559 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 02:59:42.451617 1649559 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 02:59:42.610929 1649559 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 02:59:42.795380 1649559 docker.go:234] disabling docker service ...
	I1119 02:59:42.795452 1649559 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 02:59:42.819479 1649559 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 02:59:42.845503 1649559 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 02:59:42.990136 1649559 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 02:59:43.172456 1649559 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 02:59:43.186026 1649559 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 02:59:43.199832 1649559 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 02:59:43.199896 1649559 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:59:43.208277 1649559 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1119 02:59:43.208339 1649559 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:59:43.216698 1649559 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:59:43.224481 1649559 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:59:43.232790 1649559 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 02:59:43.240475 1649559 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:59:43.248962 1649559 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:59:43.261409 1649559 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:59:43.269762 1649559 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 02:59:43.277282 1649559 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 02:59:43.284690 1649559 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:59:43.422190 1649559 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1119 02:59:43.817016 1649559 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 02:59:43.817138 1649559 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 02:59:43.821723 1649559 start.go:564] Will wait 60s for crictl version
	I1119 02:59:43.821838 1649559 ssh_runner.go:195] Run: which crictl
	I1119 02:59:43.826487 1649559 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 02:59:43.855888 1649559 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1119 02:59:43.856059 1649559 ssh_runner.go:195] Run: crio --version
	I1119 02:59:43.890100 1649559 ssh_runner.go:195] Run: crio --version
	I1119 02:59:43.926386 1649559 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
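	(Annotation: the runtime checks in the preceding lines can be reproduced by hand; a minimal sketch, assuming shell access to the node, e.g. via "minikube ssh -p default-k8s-diff-port-579203", and the same paths the log reports:
	  stat /var/run/crio/crio.sock            # the socket minikube waits up to 60s for
	  sudo /usr/local/bin/crictl version      # reported above as RuntimeName cri-o, RuntimeVersion 1.34.2
	  crio --version
	)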
	I1119 02:59:42.038952 1651562 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1119 02:59:42.039230 1651562 start.go:159] libmachine.API.Create for "embed-certs-592123" (driver="docker")
	I1119 02:59:42.039266 1651562 client.go:173] LocalClient.Create starting
	I1119 02:59:42.039326 1651562 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem
	I1119 02:59:42.039358 1651562 main.go:143] libmachine: Decoding PEM data...
	I1119 02:59:42.039376 1651562 main.go:143] libmachine: Parsing certificate...
	I1119 02:59:42.039432 1651562 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/cert.pem
	I1119 02:59:42.039452 1651562 main.go:143] libmachine: Decoding PEM data...
	I1119 02:59:42.039462 1651562 main.go:143] libmachine: Parsing certificate...
	I1119 02:59:42.039830 1651562 cli_runner.go:164] Run: docker network inspect embed-certs-592123 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1119 02:59:42.057249 1651562 cli_runner.go:211] docker network inspect embed-certs-592123 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1119 02:59:42.057330 1651562 network_create.go:284] running [docker network inspect embed-certs-592123] to gather additional debugging logs...
	I1119 02:59:42.057368 1651562 cli_runner.go:164] Run: docker network inspect embed-certs-592123
	W1119 02:59:42.079920 1651562 cli_runner.go:211] docker network inspect embed-certs-592123 returned with exit code 1
	I1119 02:59:42.079958 1651562 network_create.go:287] error running [docker network inspect embed-certs-592123]: docker network inspect embed-certs-592123: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-592123 not found
	I1119 02:59:42.079983 1651562 network_create.go:289] output of [docker network inspect embed-certs-592123]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-592123 not found
	
	** /stderr **
	I1119 02:59:42.080120 1651562 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 02:59:42.104421 1651562 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-30778cc553ec IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:62:24:59:d9:05:e6} reservation:<nil>}
	I1119 02:59:42.104846 1651562 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-564f8befa544 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:aa:bb:c9:f1:3d:0c} reservation:<nil>}
	I1119 02:59:42.105092 1651562 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-fccf9ce7bac2 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:92:9c:a6:ca:f9:d9} reservation:<nil>}
	I1119 02:59:42.105655 1651562 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400195f490}
	I1119 02:59:42.105688 1651562 network_create.go:124] attempt to create docker network embed-certs-592123 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1119 02:59:42.105753 1651562 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-592123 embed-certs-592123
	I1119 02:59:42.200247 1651562 network_create.go:108] docker network embed-certs-592123 192.168.76.0/24 created
	I1119 02:59:42.200282 1651562 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-592123" container
	I1119 02:59:42.200392 1651562 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1119 02:59:42.240539 1651562 cli_runner.go:164] Run: docker volume create embed-certs-592123 --label name.minikube.sigs.k8s.io=embed-certs-592123 --label created_by.minikube.sigs.k8s.io=true
	I1119 02:59:42.267200 1651562 oci.go:103] Successfully created a docker volume embed-certs-592123
	I1119 02:59:42.267296 1651562 cli_runner.go:164] Run: docker run --rm --name embed-certs-592123-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-592123 --entrypoint /usr/bin/test -v embed-certs-592123:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib
	I1119 02:59:42.945845 1651562 oci.go:107] Successfully prepared a docker volume embed-certs-592123
	I1119 02:59:42.945927 1651562 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 02:59:42.945936 1651562 kic.go:194] Starting extracting preloaded images to volume ...
	I1119 02:59:42.945996 1651562 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-592123:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir
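	(Annotation: the docker network and volume created for this profile can be inspected directly from the host; a minimal sketch, assuming the same docker CLI used above:
	  docker network inspect embed-certs-592123 --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'   # expected 192.168.76.0/24 per the lines above
	  docker volume inspect embed-certs-592123
	)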
	I1119 02:59:43.930772 1649559 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-579203 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 02:59:43.952088 1649559 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1119 02:59:43.955890 1649559 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 02:59:43.965152 1649559 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-579203 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-579203 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false C
ustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 02:59:43.965270 1649559 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 02:59:43.965327 1649559 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 02:59:44.007341 1649559 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 02:59:44.007366 1649559 crio.go:433] Images already preloaded, skipping extraction
	I1119 02:59:44.007429 1649559 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 02:59:44.039862 1649559 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 02:59:44.039884 1649559 cache_images.go:86] Images are preloaded, skipping loading
	I1119 02:59:44.039893 1649559 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1119 02:59:44.039977 1649559 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-579203 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-579203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 02:59:44.040070 1649559 ssh_runner.go:195] Run: crio config
	I1119 02:59:44.127423 1649559 cni.go:84] Creating CNI manager for ""
	I1119 02:59:44.127499 1649559 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 02:59:44.127537 1649559 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 02:59:44.127590 1649559 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-579203 NodeName:default-k8s-diff-port-579203 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 02:59:44.127774 1649559 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-579203"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1119 02:59:44.127899 1649559 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 02:59:44.137189 1649559 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 02:59:44.137356 1649559 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 02:59:44.146510 1649559 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1119 02:59:44.161653 1649559 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 02:59:44.177385 1649559 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
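	(Annotation: the kubeadm config rendered above is what gets copied to /var/tmp/minikube/kubeadm.yaml.new in the transfer step just above; a minimal sketch for inspecting it in place, assuming the same profile:
	  minikube ssh -p default-k8s-diff-port-579203 -- sudo cat /var/tmp/minikube/kubeadm.yaml.new
	)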
	I1119 02:59:44.192277 1649559 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1119 02:59:44.196311 1649559 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 02:59:44.206582 1649559 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:59:44.347147 1649559 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 02:59:44.366631 1649559 certs.go:69] Setting up /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/default-k8s-diff-port-579203 for IP: 192.168.85.2
	I1119 02:59:44.366709 1649559 certs.go:195] generating shared ca certs ...
	I1119 02:59:44.366741 1649559 certs.go:227] acquiring lock for ca certs: {Name:mk25124d30540ffea11da5f0aa38fb6f55186602 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:59:44.366920 1649559 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.key
	I1119 02:59:44.367012 1649559 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/proxy-client-ca.key
	I1119 02:59:44.367051 1649559 certs.go:257] generating profile certs ...
	I1119 02:59:44.367157 1649559 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/default-k8s-diff-port-579203/client.key
	I1119 02:59:44.367205 1649559 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/default-k8s-diff-port-579203/client.crt with IP's: []
	I1119 02:59:44.444784 1649559 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/default-k8s-diff-port-579203/client.crt ...
	I1119 02:59:44.444816 1649559 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/default-k8s-diff-port-579203/client.crt: {Name:mk92599fd834df9a9a71b04def0100ad1241cc93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:59:44.444985 1649559 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/default-k8s-diff-port-579203/client.key ...
	I1119 02:59:44.445009 1649559 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/default-k8s-diff-port-579203/client.key: {Name:mk6aac44a638809967825a8694552699a0c25c7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:59:44.445091 1649559 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/default-k8s-diff-port-579203/apiserver.key.1f3db3c7
	I1119 02:59:44.445110 1649559 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/default-k8s-diff-port-579203/apiserver.crt.1f3db3c7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1119 02:59:46.107868 1649559 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/default-k8s-diff-port-579203/apiserver.crt.1f3db3c7 ...
	I1119 02:59:46.107899 1649559 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/default-k8s-diff-port-579203/apiserver.crt.1f3db3c7: {Name:mkfe293518e71306c7a9d56cd9d3176e4fdd2703 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:59:46.108099 1649559 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/default-k8s-diff-port-579203/apiserver.key.1f3db3c7 ...
	I1119 02:59:46.108114 1649559 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/default-k8s-diff-port-579203/apiserver.key.1f3db3c7: {Name:mkd3825ea6d471bcaa422da590b0ccab060081a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:59:46.108206 1649559 certs.go:382] copying /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/default-k8s-diff-port-579203/apiserver.crt.1f3db3c7 -> /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/default-k8s-diff-port-579203/apiserver.crt
	I1119 02:59:46.108283 1649559 certs.go:386] copying /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/default-k8s-diff-port-579203/apiserver.key.1f3db3c7 -> /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/default-k8s-diff-port-579203/apiserver.key
	I1119 02:59:46.108350 1649559 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/default-k8s-diff-port-579203/proxy-client.key
	I1119 02:59:46.108375 1649559 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/default-k8s-diff-port-579203/proxy-client.crt with IP's: []
	I1119 02:59:46.860144 1649559 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/default-k8s-diff-port-579203/proxy-client.crt ...
	I1119 02:59:46.860182 1649559 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/default-k8s-diff-port-579203/proxy-client.crt: {Name:mkaf4a5e599eba2e347a1d222f3437cd3bcba1f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:59:46.860381 1649559 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/default-k8s-diff-port-579203/proxy-client.key ...
	I1119 02:59:46.860398 1649559 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/default-k8s-diff-port-579203/proxy-client.key: {Name:mkeb2a002b8da01b8f2d13893e78203ac4177a6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:59:46.860589 1649559 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/1465377.pem (1338 bytes)
	W1119 02:59:46.860632 1649559 certs.go:480] ignoring /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/1465377_empty.pem, impossibly tiny 0 bytes
	I1119 02:59:46.860646 1649559 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca-key.pem (1679 bytes)
	I1119 02:59:46.860674 1649559 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem (1078 bytes)
	I1119 02:59:46.860701 1649559 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/cert.pem (1123 bytes)
	I1119 02:59:46.860728 1649559 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/key.pem (1675 bytes)
	I1119 02:59:46.860774 1649559 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/files/etc/ssl/certs/14653772.pem (1708 bytes)
	I1119 02:59:46.861379 1649559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 02:59:46.880370 1649559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1119 02:59:46.898359 1649559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 02:59:46.916778 1649559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1119 02:59:46.933828 1649559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/default-k8s-diff-port-579203/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1119 02:59:46.950398 1649559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/default-k8s-diff-port-579203/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1119 02:59:46.968124 1649559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/default-k8s-diff-port-579203/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 02:59:46.986753 1649559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/default-k8s-diff-port-579203/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1119 02:59:47.006617 1649559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/files/etc/ssl/certs/14653772.pem --> /usr/share/ca-certificates/14653772.pem (1708 bytes)
	I1119 02:59:47.035869 1649559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 02:59:47.054867 1649559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/1465377.pem --> /usr/share/ca-certificates/1465377.pem (1338 bytes)
	I1119 02:59:47.073224 1649559 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 02:59:47.087831 1649559 ssh_runner.go:195] Run: openssl version
	I1119 02:59:47.094350 1649559 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14653772.pem && ln -fs /usr/share/ca-certificates/14653772.pem /etc/ssl/certs/14653772.pem"
	I1119 02:59:47.102433 1649559 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14653772.pem
	I1119 02:59:47.105979 1649559 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 02:04 /usr/share/ca-certificates/14653772.pem
	I1119 02:59:47.106039 1649559 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14653772.pem
	I1119 02:59:47.146503 1649559 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14653772.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 02:59:47.155266 1649559 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 02:59:47.163530 1649559 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:59:47.167239 1649559 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 01:58 /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:59:47.167311 1649559 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:59:47.207924 1649559 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 02:59:47.216090 1649559 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1465377.pem && ln -fs /usr/share/ca-certificates/1465377.pem /etc/ssl/certs/1465377.pem"
	I1119 02:59:47.224062 1649559 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1465377.pem
	I1119 02:59:47.228793 1649559 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 02:04 /usr/share/ca-certificates/1465377.pem
	I1119 02:59:47.228853 1649559 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1465377.pem
	I1119 02:59:47.269583 1649559 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1465377.pem /etc/ssl/certs/51391683.0"
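The openssl-hash / ln -fs pairs above are how minikube publishes each CA into the node's trust store: OpenSSL looks up trusted certificates under /etc/ssl/certs by subject hash, so the hash becomes the symlink name (b5213941.0, 3ec20f2e.0, 51391683.0 in this run). A minimal sketch of the same two steps run by hand; the minikubeCA path is taken from this log, everything else is illustrative:

	# Compute the subject hash OpenSSL uses to locate a trusted CA
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	# Publish the CA under that hash so verification against /etc/ssl/certs can find it
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"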
	I1119 02:59:47.278128 1649559 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 02:59:47.281415 1649559 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1119 02:59:47.281466 1649559 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-579203 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-579203 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cust
omQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:59:47.281565 1649559 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 02:59:47.281629 1649559 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 02:59:47.307907 1649559 cri.go:89] found id: ""
	I1119 02:59:47.308023 1649559 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 02:59:47.316305 1649559 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1119 02:59:47.324446 1649559 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1119 02:59:47.324505 1649559 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1119 02:59:47.332956 1649559 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1119 02:59:47.332983 1649559 kubeadm.go:158] found existing configuration files:
	
	I1119 02:59:47.333062 1649559 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1119 02:59:47.341155 1649559 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1119 02:59:47.341257 1649559 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1119 02:59:47.348517 1649559 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1119 02:59:47.356093 1649559 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1119 02:59:47.356157 1649559 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1119 02:59:47.363400 1649559 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1119 02:59:47.370785 1649559 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1119 02:59:47.370902 1649559 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1119 02:59:47.378184 1649559 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1119 02:59:47.386252 1649559 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1119 02:59:47.386335 1649559 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1119 02:59:47.394672 1649559 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1119 02:59:47.434716 1649559 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1119 02:59:47.434800 1649559 kubeadm.go:319] [preflight] Running pre-flight checks
	I1119 02:59:47.459317 1649559 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1119 02:59:47.459455 1649559 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1119 02:59:47.459531 1649559 kubeadm.go:319] OS: Linux
	I1119 02:59:47.459605 1649559 kubeadm.go:319] CGROUPS_CPU: enabled
	I1119 02:59:47.459681 1649559 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1119 02:59:47.459761 1649559 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1119 02:59:47.459836 1649559 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1119 02:59:47.459914 1649559 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1119 02:59:47.460015 1649559 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1119 02:59:47.460097 1649559 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1119 02:59:47.460182 1649559 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1119 02:59:47.460267 1649559 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1119 02:59:47.536596 1649559 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1119 02:59:47.536738 1649559 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1119 02:59:47.536855 1649559 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1119 02:59:47.546973 1649559 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1119 02:59:47.554165 1649559 out.go:252]   - Generating certificates and keys ...
	I1119 02:59:47.554318 1649559 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1119 02:59:47.554437 1649559 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1119 02:59:48.484432 1649559 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1119 02:59:47.564641 1651562 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-592123:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir: (4.618610537s)
	I1119 02:59:47.564682 1651562 kic.go:203] duration metric: took 4.618729745s to extract preloaded images to volume ...
	W1119 02:59:47.564813 1651562 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1119 02:59:47.564914 1651562 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1119 02:59:47.652434 1651562 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-592123 --name embed-certs-592123 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-592123 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-592123 --network embed-certs-592123 --ip 192.168.76.2 --volume embed-certs-592123:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a
	I1119 02:59:48.018541 1651562 cli_runner.go:164] Run: docker container inspect embed-certs-592123 --format={{.State.Running}}
	I1119 02:59:48.049965 1651562 cli_runner.go:164] Run: docker container inspect embed-certs-592123 --format={{.State.Status}}
	I1119 02:59:48.084262 1651562 cli_runner.go:164] Run: docker exec embed-certs-592123 stat /var/lib/dpkg/alternatives/iptables
	I1119 02:59:48.161231 1651562 oci.go:144] the created container "embed-certs-592123" has a running status.
	I1119 02:59:48.161258 1651562 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21924-1463525/.minikube/machines/embed-certs-592123/id_rsa...
	I1119 02:59:48.218622 1651562 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21924-1463525/.minikube/machines/embed-certs-592123/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1119 02:59:48.247775 1651562 cli_runner.go:164] Run: docker container inspect embed-certs-592123 --format={{.State.Status}}
	I1119 02:59:48.274162 1651562 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1119 02:59:48.274196 1651562 kic_runner.go:114] Args: [docker exec --privileged embed-certs-592123 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1119 02:59:48.336036 1651562 cli_runner.go:164] Run: docker container inspect embed-certs-592123 --format={{.State.Status}}
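The kic container created above publishes SSH, the Docker socket and the API server only on loopback host ports; the port minikube dials later in this log (127.0.0.1:34910) comes straight out of Docker's port mapping. A small sketch of recovering it by hand, assuming the container name from this run:

	# Host port mapped to the container's SSH port (22/tcp)
	docker port embed-certs-592123 22
	# Same lookup via inspect, mirroring the format string minikube uses in the lines below
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' embed-certs-592123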
	I1119 02:59:48.358052 1651562 machine.go:94] provisionDockerMachine start ...
	I1119 02:59:48.358154 1651562 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-592123
	I1119 02:59:48.380852 1651562 main.go:143] libmachine: Using SSH client type: native
	I1119 02:59:48.381185 1651562 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34910 <nil> <nil>}
	I1119 02:59:48.381241 1651562 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 02:59:48.382060 1651562 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1119 02:59:51.529615 1651562 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-592123
	
	I1119 02:59:51.529641 1651562 ubuntu.go:182] provisioning hostname "embed-certs-592123"
	I1119 02:59:51.529738 1651562 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-592123
	I1119 02:59:51.551525 1651562 main.go:143] libmachine: Using SSH client type: native
	I1119 02:59:51.551862 1651562 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34910 <nil> <nil>}
	I1119 02:59:51.551878 1651562 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-592123 && echo "embed-certs-592123" | sudo tee /etc/hostname
	I1119 02:59:51.723783 1651562 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-592123
	
	I1119 02:59:51.723927 1651562 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-592123
	I1119 02:59:50.758342 1649559 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1119 02:59:51.441906 1649559 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1119 02:59:51.804424 1649559 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1119 02:59:52.017978 1649559 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1119 02:59:52.018154 1649559 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-579203 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1119 02:59:52.450964 1649559 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1119 02:59:52.451137 1649559 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-579203 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1119 02:59:53.370320 1649559 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1119 02:59:51.745456 1651562 main.go:143] libmachine: Using SSH client type: native
	I1119 02:59:51.745839 1651562 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34910 <nil> <nil>}
	I1119 02:59:51.745865 1651562 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-592123' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-592123/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-592123' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 02:59:51.894530 1651562 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 02:59:51.894556 1651562 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21924-1463525/.minikube CaCertPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21924-1463525/.minikube}
	I1119 02:59:51.894585 1651562 ubuntu.go:190] setting up certificates
	I1119 02:59:51.894599 1651562 provision.go:84] configureAuth start
	I1119 02:59:51.894671 1651562 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-592123
	I1119 02:59:51.918123 1651562 provision.go:143] copyHostCerts
	I1119 02:59:51.918201 1651562 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.pem, removing ...
	I1119 02:59:51.918221 1651562 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.pem
	I1119 02:59:51.918302 1651562 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.pem (1078 bytes)
	I1119 02:59:51.918408 1651562 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-1463525/.minikube/cert.pem, removing ...
	I1119 02:59:51.918420 1651562 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-1463525/.minikube/cert.pem
	I1119 02:59:51.918458 1651562 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21924-1463525/.minikube/cert.pem (1123 bytes)
	I1119 02:59:51.918534 1651562 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-1463525/.minikube/key.pem, removing ...
	I1119 02:59:51.918544 1651562 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-1463525/.minikube/key.pem
	I1119 02:59:51.918572 1651562 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21924-1463525/.minikube/key.pem (1675 bytes)
	I1119 02:59:51.918638 1651562 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca-key.pem org=jenkins.embed-certs-592123 san=[127.0.0.1 192.168.76.2 embed-certs-592123 localhost minikube]
	I1119 02:59:52.725258 1651562 provision.go:177] copyRemoteCerts
	I1119 02:59:52.725333 1651562 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 02:59:52.725383 1651562 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-592123
	I1119 02:59:52.744424 1651562 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34910 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/embed-certs-592123/id_rsa Username:docker}
	I1119 02:59:52.854240 1651562 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1119 02:59:52.874211 1651562 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1119 02:59:52.894524 1651562 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1119 02:59:52.914060 1651562 provision.go:87] duration metric: took 1.019439868s to configureAuth
	I1119 02:59:52.914091 1651562 ubuntu.go:206] setting minikube options for container-runtime
	I1119 02:59:52.914279 1651562 config.go:182] Loaded profile config "embed-certs-592123": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:59:52.914394 1651562 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-592123
	I1119 02:59:52.935758 1651562 main.go:143] libmachine: Using SSH client type: native
	I1119 02:59:52.936099 1651562 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34910 <nil> <nil>}
	I1119 02:59:52.936121 1651562 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 02:59:53.266666 1651562 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 02:59:53.266692 1651562 machine.go:97] duration metric: took 4.908617658s to provisionDockerMachine
	I1119 02:59:53.266708 1651562 client.go:176] duration metric: took 11.227428755s to LocalClient.Create
	I1119 02:59:53.266722 1651562 start.go:167] duration metric: took 11.227493811s to libmachine.API.Create "embed-certs-592123"
	I1119 02:59:53.266729 1651562 start.go:293] postStartSetup for "embed-certs-592123" (driver="docker")
	I1119 02:59:53.266739 1651562 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 02:59:53.266813 1651562 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 02:59:53.266872 1651562 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-592123
	I1119 02:59:53.290378 1651562 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34910 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/embed-certs-592123/id_rsa Username:docker}
	I1119 02:59:53.402677 1651562 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 02:59:53.406643 1651562 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 02:59:53.406670 1651562 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 02:59:53.406680 1651562 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-1463525/.minikube/addons for local assets ...
	I1119 02:59:53.406744 1651562 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-1463525/.minikube/files for local assets ...
	I1119 02:59:53.406821 1651562 filesync.go:149] local asset: /home/jenkins/minikube-integration/21924-1463525/.minikube/files/etc/ssl/certs/14653772.pem -> 14653772.pem in /etc/ssl/certs
	I1119 02:59:53.406933 1651562 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 02:59:53.415561 1651562 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/files/etc/ssl/certs/14653772.pem --> /etc/ssl/certs/14653772.pem (1708 bytes)
	I1119 02:59:53.443785 1651562 start.go:296] duration metric: took 177.039957ms for postStartSetup
	I1119 02:59:53.444198 1651562 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-592123
	I1119 02:59:53.467148 1651562 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/embed-certs-592123/config.json ...
	I1119 02:59:53.467433 1651562 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 02:59:53.467485 1651562 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-592123
	I1119 02:59:53.483401 1651562 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34910 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/embed-certs-592123/id_rsa Username:docker}
	I1119 02:59:53.582621 1651562 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 02:59:53.587474 1651562 start.go:128] duration metric: took 11.552050892s to createHost
	I1119 02:59:53.587498 1651562 start.go:83] releasing machines lock for "embed-certs-592123", held for 11.552176436s
	I1119 02:59:53.587568 1651562 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-592123
	I1119 02:59:53.608785 1651562 ssh_runner.go:195] Run: cat /version.json
	I1119 02:59:53.608837 1651562 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-592123
	I1119 02:59:53.612617 1651562 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 02:59:53.612689 1651562 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-592123
	I1119 02:59:53.627558 1651562 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34910 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/embed-certs-592123/id_rsa Username:docker}
	I1119 02:59:53.647100 1651562 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34910 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/embed-certs-592123/id_rsa Username:docker}
	I1119 02:59:53.742009 1651562 ssh_runner.go:195] Run: systemctl --version
	I1119 02:59:53.850125 1651562 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 02:59:53.894116 1651562 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 02:59:53.898932 1651562 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 02:59:53.899001 1651562 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 02:59:53.943573 1651562 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1119 02:59:53.943598 1651562 start.go:496] detecting cgroup driver to use...
	I1119 02:59:53.943630 1651562 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1119 02:59:53.943691 1651562 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 02:59:53.971657 1651562 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 02:59:53.990258 1651562 docker.go:218] disabling cri-docker service (if available) ...
	I1119 02:59:53.990374 1651562 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 02:59:54.017248 1651562 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 02:59:54.048368 1651562 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 02:59:54.236533 1651562 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 02:59:54.425139 1651562 docker.go:234] disabling docker service ...
	I1119 02:59:54.425293 1651562 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 02:59:54.456671 1651562 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 02:59:54.474619 1651562 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 02:59:54.629891 1651562 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 02:59:54.790625 1651562 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 02:59:54.807456 1651562 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 02:59:54.830612 1651562 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 02:59:54.830719 1651562 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:59:54.844226 1651562 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1119 02:59:54.844322 1651562 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:59:54.854066 1651562 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:59:54.864548 1651562 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:59:54.879922 1651562 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 02:59:54.891340 1651562 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:59:54.900214 1651562 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:59:54.914372 1651562 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:59:54.923043 1651562 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 02:59:54.931275 1651562 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 02:59:54.939097 1651562 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:59:55.089684 1651562 ssh_runner.go:195] Run: sudo systemctl restart crio
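Every sed/grep edit above targets the same drop-in, /etc/crio/crio.conf.d/02-crio.conf, before crio is restarted. A quick way to confirm the result on the node; the expected values are reconstructed from the commands in this log, not copied from the actual file:

	# Show the keys minikube just rewrote in the CRI-O drop-in
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# Expected, per the sed commands above:
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",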
	I1119 02:59:55.281373 1651562 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 02:59:55.281496 1651562 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 02:59:55.285785 1651562 start.go:564] Will wait 60s for crictl version
	I1119 02:59:55.285902 1651562 ssh_runner.go:195] Run: which crictl
	I1119 02:59:55.289964 1651562 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 02:59:55.314003 1651562 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1119 02:59:55.314162 1651562 ssh_runner.go:195] Run: crio --version
	I1119 02:59:55.347595 1651562 ssh_runner.go:195] Run: crio --version
	I1119 02:59:55.385140 1651562 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1119 02:59:55.387878 1651562 cli_runner.go:164] Run: docker network inspect embed-certs-592123 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 02:59:55.403037 1651562 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1119 02:59:55.406910 1651562 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 02:59:55.416437 1651562 kubeadm.go:884] updating cluster {Name:embed-certs-592123 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-592123 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath
: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 02:59:55.416551 1651562 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 02:59:55.416605 1651562 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 02:59:55.449306 1651562 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 02:59:55.449332 1651562 crio.go:433] Images already preloaded, skipping extraction
	I1119 02:59:55.449384 1651562 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 02:59:55.485101 1651562 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 02:59:55.485125 1651562 cache_images.go:86] Images are preloaded, skipping loading
	I1119 02:59:55.485134 1651562 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1119 02:59:55.485224 1651562 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-592123 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-592123 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
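The [Unit]/[Service] fragment above is the kubelet drop-in that minikube writes a few lines further down as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the scp at 02:59:55). A small sketch for checking on the node that the override took effect; these commands are illustrative and not part of this run:

	# Show the unit plus all drop-ins systemd has merged for kubelet
	systemctl cat kubelet
	# Or just the effective ExecStart line
	systemctl show kubelet -p ExecStart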
	I1119 02:59:55.485332 1651562 ssh_runner.go:195] Run: crio config
	I1119 02:59:55.550342 1651562 cni.go:84] Creating CNI manager for ""
	I1119 02:59:55.550382 1651562 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 02:59:55.550399 1651562 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 02:59:55.550421 1651562 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-592123 NodeName:embed-certs-592123 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 02:59:55.550564 1651562 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-592123"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
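That is the complete kubeadm.yaml minikube renders for this profile: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in one file, later copied onto the node as /var/tmp/minikube/kubeadm.yaml. One way to exercise such a file without actually bringing a control plane up is kubeadm's dry-run mode; the binary and config paths below are the ones this log uses, the invocation itself is only a sketch:

	# Render what kubeadm would do with this config, without applying it
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init --dry-run --config /var/tmp/minikube/kubeadm.yaml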
	I1119 02:59:55.550649 1651562 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 02:59:55.558545 1651562 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 02:59:55.558628 1651562 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 02:59:55.565842 1651562 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1119 02:59:55.578288 1651562 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 02:59:55.590460 1651562 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1119 02:59:55.602981 1651562 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1119 02:59:55.606855 1651562 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 02:59:55.615773 1651562 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:59:55.762446 1651562 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 02:59:55.778903 1651562 certs.go:69] Setting up /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/embed-certs-592123 for IP: 192.168.76.2
	I1119 02:59:55.778926 1651562 certs.go:195] generating shared ca certs ...
	I1119 02:59:55.778943 1651562 certs.go:227] acquiring lock for ca certs: {Name:mk25124d30540ffea11da5f0aa38fb6f55186602 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:59:55.779073 1651562 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.key
	I1119 02:59:55.779131 1651562 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/proxy-client-ca.key
	I1119 02:59:55.779143 1651562 certs.go:257] generating profile certs ...
	I1119 02:59:55.779198 1651562 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/embed-certs-592123/client.key
	I1119 02:59:55.779214 1651562 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/embed-certs-592123/client.crt with IP's: []
	I1119 02:59:56.082578 1651562 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/embed-certs-592123/client.crt ...
	I1119 02:59:56.082612 1651562 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/embed-certs-592123/client.crt: {Name:mka0659fa46018fedd2261c7d014a8963c3aeb74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:59:56.082885 1651562 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/embed-certs-592123/client.key ...
	I1119 02:59:56.082902 1651562 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/embed-certs-592123/client.key: {Name:mkaee2d4223d2050f5c8f6cd0f214ebf899b8e7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:59:56.083055 1651562 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/embed-certs-592123/apiserver.key.9c644e00
	I1119 02:59:56.083090 1651562 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/embed-certs-592123/apiserver.crt.9c644e00 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1119 02:59:56.498350 1651562 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/embed-certs-592123/apiserver.crt.9c644e00 ...
	I1119 02:59:56.498384 1651562 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/embed-certs-592123/apiserver.crt.9c644e00: {Name:mkbe1588db19fd4b9250e65d26caa9c047847860 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:59:56.498640 1651562 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/embed-certs-592123/apiserver.key.9c644e00 ...
	I1119 02:59:56.498659 1651562 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/embed-certs-592123/apiserver.key.9c644e00: {Name:mk7f5da4191a12d32f058ab85ca1df365e79b208 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:59:56.498799 1651562 certs.go:382] copying /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/embed-certs-592123/apiserver.crt.9c644e00 -> /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/embed-certs-592123/apiserver.crt
	I1119 02:59:56.498922 1651562 certs.go:386] copying /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/embed-certs-592123/apiserver.key.9c644e00 -> /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/embed-certs-592123/apiserver.key
	I1119 02:59:56.499009 1651562 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/embed-certs-592123/proxy-client.key
	I1119 02:59:56.499044 1651562 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/embed-certs-592123/proxy-client.crt with IP's: []
	I1119 02:59:56.787218 1651562 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/embed-certs-592123/proxy-client.crt ...
	I1119 02:59:56.787251 1651562 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/embed-certs-592123/proxy-client.crt: {Name:mk0e6b936f5feee524ae96f54d40ee87bb1477d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:59:56.787505 1651562 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/embed-certs-592123/proxy-client.key ...
	I1119 02:59:56.787536 1651562 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/embed-certs-592123/proxy-client.key: {Name:mkf09d1cf393e5fa0d0545e06e358f2ba7929abd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:59:56.787776 1651562 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/1465377.pem (1338 bytes)
	W1119 02:59:56.787840 1651562 certs.go:480] ignoring /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/1465377_empty.pem, impossibly tiny 0 bytes
	I1119 02:59:56.787856 1651562 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca-key.pem (1679 bytes)
	I1119 02:59:56.787896 1651562 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem (1078 bytes)
	I1119 02:59:56.787941 1651562 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/cert.pem (1123 bytes)
	I1119 02:59:56.787975 1651562 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/key.pem (1675 bytes)
	I1119 02:59:56.788044 1651562 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/files/etc/ssl/certs/14653772.pem (1708 bytes)
	I1119 02:59:56.788712 1651562 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 02:59:56.808142 1651562 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1119 02:59:56.824739 1651562 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 02:59:56.842110 1651562 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1119 02:59:56.859350 1651562 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/embed-certs-592123/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1119 02:59:56.881066 1651562 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/embed-certs-592123/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1119 02:59:56.900070 1651562 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/embed-certs-592123/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 02:59:56.918691 1651562 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/embed-certs-592123/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1119 02:59:56.941952 1651562 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/1465377.pem --> /usr/share/ca-certificates/1465377.pem (1338 bytes)
	I1119 02:59:56.961173 1651562 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/files/etc/ssl/certs/14653772.pem --> /usr/share/ca-certificates/14653772.pem (1708 bytes)
	I1119 02:59:56.980408 1651562 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 02:59:56.999414 1651562 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 02:59:57.015694 1651562 ssh_runner.go:195] Run: openssl version
	I1119 02:59:57.022985 1651562 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1465377.pem && ln -fs /usr/share/ca-certificates/1465377.pem /etc/ssl/certs/1465377.pem"
	I1119 02:59:57.032129 1651562 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1465377.pem
	I1119 02:59:57.036320 1651562 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 02:04 /usr/share/ca-certificates/1465377.pem
	I1119 02:59:57.036389 1651562 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1465377.pem
	I1119 02:59:57.077920 1651562 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1465377.pem /etc/ssl/certs/51391683.0"
	I1119 02:59:57.086972 1651562 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14653772.pem && ln -fs /usr/share/ca-certificates/14653772.pem /etc/ssl/certs/14653772.pem"
	I1119 02:59:57.095706 1651562 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14653772.pem
	I1119 02:59:57.099734 1651562 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 02:04 /usr/share/ca-certificates/14653772.pem
	I1119 02:59:57.099817 1651562 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14653772.pem
	I1119 02:59:57.141323 1651562 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14653772.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 02:59:57.150481 1651562 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 02:59:57.159613 1651562 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:59:57.164117 1651562 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 01:58 /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:59:57.164211 1651562 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:59:57.205823 1651562 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 02:59:57.214890 1651562 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 02:59:57.219228 1651562 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1119 02:59:57.219290 1651562 kubeadm.go:401] StartCluster: {Name:embed-certs-592123 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-592123 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: S
ocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:59:57.219365 1651562 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 02:59:57.219445 1651562 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 02:59:57.246728 1651562 cri.go:89] found id: ""
	I1119 02:59:57.246807 1651562 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 02:59:57.257017 1651562 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1119 02:59:57.269205 1651562 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1119 02:59:57.269282 1651562 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1119 02:59:57.283023 1651562 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1119 02:59:57.283044 1651562 kubeadm.go:158] found existing configuration files:
	
	I1119 02:59:57.283114 1651562 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1119 02:59:57.293998 1651562 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1119 02:59:57.294071 1651562 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1119 02:59:57.301096 1651562 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1119 02:59:57.311769 1651562 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1119 02:59:57.311876 1651562 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1119 02:59:57.326682 1651562 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1119 02:59:57.335217 1651562 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1119 02:59:57.335296 1651562 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1119 02:59:57.343438 1651562 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1119 02:59:57.352524 1651562 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1119 02:59:57.352606 1651562 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1119 02:59:57.360645 1651562 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1119 02:59:57.418099 1651562 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1119 02:59:57.418506 1651562 kubeadm.go:319] [preflight] Running pre-flight checks
	I1119 02:59:57.490935 1651562 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1119 02:59:57.491016 1651562 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1119 02:59:57.491070 1651562 kubeadm.go:319] OS: Linux
	I1119 02:59:57.491123 1651562 kubeadm.go:319] CGROUPS_CPU: enabled
	I1119 02:59:57.491178 1651562 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1119 02:59:57.491234 1651562 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1119 02:59:57.491288 1651562 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1119 02:59:57.491343 1651562 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1119 02:59:57.491398 1651562 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1119 02:59:57.491449 1651562 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1119 02:59:57.491504 1651562 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1119 02:59:57.491555 1651562 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1119 02:59:57.601488 1651562 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1119 02:59:57.601621 1651562 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1119 02:59:57.601718 1651562 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1119 02:59:57.609979 1651562 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1119 02:59:53.965670 1649559 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1119 02:59:54.481849 1649559 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1119 02:59:54.481935 1649559 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1119 02:59:55.287094 1649559 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1119 02:59:56.158431 1649559 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1119 02:59:57.456748 1649559 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1119 02:59:57.893621 1649559 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1119 02:59:58.613846 1649559 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1119 02:59:58.614156 1649559 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1119 02:59:58.621523 1649559 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1119 02:59:58.624837 1649559 out.go:252]   - Booting up control plane ...
	I1119 02:59:58.624947 1649559 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1119 02:59:58.625029 1649559 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1119 02:59:58.625104 1649559 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1119 02:59:58.641869 1649559 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1119 02:59:58.641978 1649559 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1119 02:59:58.644180 1649559 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1119 02:59:58.644516 1649559 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1119 02:59:58.644733 1649559 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1119 02:59:58.799023 1649559 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1119 02:59:58.799148 1649559 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1119 02:59:57.615890 1651562 out.go:252]   - Generating certificates and keys ...
	I1119 02:59:57.615983 1651562 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1119 02:59:57.616056 1651562 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1119 02:59:57.931481 1651562 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1119 02:59:58.254179 1651562 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1119 02:59:58.903299 1651562 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1119 02:59:59.307143 1651562 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1119 03:00:01.600716 1651562 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1119 03:00:01.601122 1651562 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-592123 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1119 02:59:59.804890 1649559 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.003404958s
	I1119 02:59:59.811160 1649559 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1119 02:59:59.811841 1649559 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8444/livez
	I1119 02:59:59.812164 1649559 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1119 02:59:59.812798 1649559 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1119 03:00:01.746721 1651562 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1119 03:00:01.747276 1651562 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-592123 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1119 03:00:02.693843 1651562 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1119 03:00:03.414378 1651562 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1119 03:00:03.769844 1651562 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1119 03:00:03.769919 1651562 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1119 03:00:03.889882 1651562 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1119 03:00:04.967668 1651562 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1119 03:00:05.605369 1651562 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1119 03:00:06.053090 1651562 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1119 03:00:06.537904 1651562 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1119 03:00:06.538006 1651562 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1119 03:00:06.541942 1651562 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1119 03:00:06.545273 1651562 out.go:252]   - Booting up control plane ...
	I1119 03:00:06.545414 1651562 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1119 03:00:06.545496 1651562 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1119 03:00:06.545582 1651562 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1119 03:00:06.577917 1651562 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1119 03:00:06.578316 1651562 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1119 03:00:06.589135 1651562 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1119 03:00:06.589240 1651562 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1119 03:00:06.589282 1651562 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1119 03:00:05.813671 1649559 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 6.000263278s
	I1119 03:00:07.931468 1649559 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 8.117903692s
	I1119 03:00:09.814643 1649559 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 10.002069765s
	I1119 03:00:09.837974 1649559 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1119 03:00:09.853447 1649559 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1119 03:00:09.873112 1649559 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1119 03:00:09.873567 1649559 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-579203 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1119 03:00:09.894010 1649559 kubeadm.go:319] [bootstrap-token] Using token: rlqfzf.sg4zgeq25fu8bm02
	I1119 03:00:09.897151 1649559 out.go:252]   - Configuring RBAC rules ...
	I1119 03:00:09.897279 1649559 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1119 03:00:09.907391 1649559 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1119 03:00:09.919354 1649559 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1119 03:00:09.926913 1649559 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1119 03:00:09.931632 1649559 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1119 03:00:09.938940 1649559 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1119 03:00:10.223760 1649559 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1119 03:00:10.669273 1649559 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1119 03:00:11.223790 1649559 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1119 03:00:11.225300 1649559 kubeadm.go:319] 
	I1119 03:00:11.225382 1649559 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1119 03:00:11.225389 1649559 kubeadm.go:319] 
	I1119 03:00:11.225470 1649559 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1119 03:00:11.225481 1649559 kubeadm.go:319] 
	I1119 03:00:11.225530 1649559 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1119 03:00:11.225978 1649559 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1119 03:00:11.226044 1649559 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1119 03:00:11.226050 1649559 kubeadm.go:319] 
	I1119 03:00:11.226107 1649559 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1119 03:00:11.226112 1649559 kubeadm.go:319] 
	I1119 03:00:11.226161 1649559 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1119 03:00:11.226166 1649559 kubeadm.go:319] 
	I1119 03:00:11.226220 1649559 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1119 03:00:11.226298 1649559 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1119 03:00:11.226369 1649559 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1119 03:00:11.226374 1649559 kubeadm.go:319] 
	I1119 03:00:11.226710 1649559 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1119 03:00:11.226849 1649559 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1119 03:00:11.226877 1649559 kubeadm.go:319] 
	I1119 03:00:11.227187 1649559 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token rlqfzf.sg4zgeq25fu8bm02 \
	I1119 03:00:11.227300 1649559 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:abb22cc8ae8e186956cff8cc7dabd6326c697e35c4ead85bcd3b5702cdc3f73a \
	I1119 03:00:11.227518 1649559 kubeadm.go:319] 	--control-plane 
	I1119 03:00:11.227529 1649559 kubeadm.go:319] 
	I1119 03:00:11.227812 1649559 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1119 03:00:11.227822 1649559 kubeadm.go:319] 
	I1119 03:00:11.228115 1649559 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token rlqfzf.sg4zgeq25fu8bm02 \
	I1119 03:00:11.228409 1649559 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:abb22cc8ae8e186956cff8cc7dabd6326c697e35c4ead85bcd3b5702cdc3f73a 
	I1119 03:00:11.245840 1649559 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1119 03:00:11.246074 1649559 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1119 03:00:11.246191 1649559 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1119 03:00:11.246206 1649559 cni.go:84] Creating CNI manager for ""
	I1119 03:00:11.246213 1649559 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 03:00:11.249762 1649559 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1119 03:00:06.796755 1651562 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1119 03:00:06.796880 1651562 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1119 03:00:08.297890 1651562 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501407621s
	I1119 03:00:08.301773 1651562 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1119 03:00:08.302145 1651562 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1119 03:00:08.305840 1651562 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1119 03:00:08.306192 1651562 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1119 03:00:11.252637 1649559 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1119 03:00:11.262081 1649559 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1119 03:00:11.262151 1649559 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1119 03:00:11.297795 1649559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1119 03:00:11.818180 1649559 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1119 03:00:11.818402 1649559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-579203 minikube.k8s.io/updated_at=2025_11_19T03_00_11_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ebd49efe8b7d44f89fc96e21beffcd64dfee8277 minikube.k8s.io/name=default-k8s-diff-port-579203 minikube.k8s.io/primary=true
	I1119 03:00:11.818555 1649559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 03:00:12.170796 1649559 ops.go:34] apiserver oom_adj: -16
	I1119 03:00:12.170817 1649559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 03:00:12.671878 1649559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 03:00:13.171612 1649559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 03:00:13.671140 1649559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 03:00:14.171688 1649559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 03:00:14.671307 1649559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 03:00:15.171077 1649559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 03:00:15.309337 1649559 kubeadm.go:1114] duration metric: took 3.491069896s to wait for elevateKubeSystemPrivileges
	I1119 03:00:15.309363 1649559 kubeadm.go:403] duration metric: took 28.027901223s to StartCluster
	I1119 03:00:15.309437 1649559 settings.go:142] acquiring lock: {Name:mk60840cd190596747bdc8f4ec6ab9c30048bf11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 03:00:15.309576 1649559 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21924-1463525/kubeconfig
	I1119 03:00:15.310372 1649559 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/kubeconfig: {Name:mk569dbb5fa76b01404eb61ebb04e9884665ea78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 03:00:15.310715 1649559 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1119 03:00:15.310945 1649559 config.go:182] Loaded profile config "default-k8s-diff-port-579203": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 03:00:15.311042 1649559 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 03:00:15.311103 1649559 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-579203"
	I1119 03:00:15.311118 1649559 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-579203"
	I1119 03:00:15.311138 1649559 host.go:66] Checking if "default-k8s-diff-port-579203" exists ...
	I1119 03:00:15.311801 1649559 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-579203 --format={{.State.Status}}
	I1119 03:00:15.311019 1649559 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 03:00:15.312574 1649559 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-579203"
	I1119 03:00:15.312641 1649559 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-579203"
	I1119 03:00:15.312926 1649559 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-579203 --format={{.State.Status}}
	I1119 03:00:15.317624 1649559 out.go:179] * Verifying Kubernetes components...
	I1119 03:00:15.323670 1649559 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 03:00:15.346082 1649559 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 03:00:12.799369 1651562 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.493048293s
	I1119 03:00:15.116980 1651562 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 6.810339701s
	I1119 03:00:16.305274 1651562 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 8.002665792s
	I1119 03:00:16.336819 1651562 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1119 03:00:16.355358 1651562 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1119 03:00:16.379280 1651562 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1119 03:00:16.379767 1651562 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-592123 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1119 03:00:16.398748 1651562 kubeadm.go:319] [bootstrap-token] Using token: madf65.z1gbue97bfudhybf
	I1119 03:00:16.402001 1651562 out.go:252]   - Configuring RBAC rules ...
	I1119 03:00:16.402130 1651562 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1119 03:00:16.410282 1651562 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1119 03:00:16.420460 1651562 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1119 03:00:16.437604 1651562 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1119 03:00:16.445608 1651562 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1119 03:00:16.453889 1651562 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1119 03:00:16.716907 1651562 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1119 03:00:15.350546 1649559 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 03:00:15.350578 1649559 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 03:00:15.350651 1649559 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-579203
	I1119 03:00:15.365602 1649559 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-579203"
	I1119 03:00:15.365654 1649559 host.go:66] Checking if "default-k8s-diff-port-579203" exists ...
	I1119 03:00:15.366097 1649559 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-579203 --format={{.State.Status}}
	I1119 03:00:15.388394 1649559 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34905 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/default-k8s-diff-port-579203/id_rsa Username:docker}
	I1119 03:00:15.400557 1649559 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 03:00:15.400584 1649559 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 03:00:15.400644 1649559 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-579203
	I1119 03:00:15.427881 1649559 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34905 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/default-k8s-diff-port-579203/id_rsa Username:docker}
	I1119 03:00:15.787187 1649559 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 03:00:15.846908 1649559 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1119 03:00:15.880065 1649559 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 03:00:15.882055 1649559 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 03:00:17.053304 1649559 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.266033736s)
	I1119 03:00:17.053359 1649559 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.206361734s)
	I1119 03:00:17.053370 1649559 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1119 03:00:17.054427 1649559 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.174287951s)
	I1119 03:00:17.055056 1649559 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-579203" to be "Ready" ...
	I1119 03:00:17.055290 1649559 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.173166392s)
	I1119 03:00:17.123729 1649559 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1119 03:00:17.389219 1651562 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1119 03:00:17.712816 1651562 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1119 03:00:17.714408 1651562 kubeadm.go:319] 
	I1119 03:00:17.714483 1651562 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1119 03:00:17.714490 1651562 kubeadm.go:319] 
	I1119 03:00:17.714570 1651562 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1119 03:00:17.714575 1651562 kubeadm.go:319] 
	I1119 03:00:17.714601 1651562 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1119 03:00:17.715081 1651562 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1119 03:00:17.715141 1651562 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1119 03:00:17.715146 1651562 kubeadm.go:319] 
	I1119 03:00:17.715203 1651562 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1119 03:00:17.715208 1651562 kubeadm.go:319] 
	I1119 03:00:17.715258 1651562 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1119 03:00:17.715263 1651562 kubeadm.go:319] 
	I1119 03:00:17.715317 1651562 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1119 03:00:17.715396 1651562 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1119 03:00:17.715467 1651562 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1119 03:00:17.715472 1651562 kubeadm.go:319] 
	I1119 03:00:17.715757 1651562 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1119 03:00:17.715844 1651562 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1119 03:00:17.715848 1651562 kubeadm.go:319] 
	I1119 03:00:17.716138 1651562 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token madf65.z1gbue97bfudhybf \
	I1119 03:00:17.716252 1651562 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:abb22cc8ae8e186956cff8cc7dabd6326c697e35c4ead85bcd3b5702cdc3f73a \
	I1119 03:00:17.716445 1651562 kubeadm.go:319] 	--control-plane 
	I1119 03:00:17.716455 1651562 kubeadm.go:319] 
	I1119 03:00:17.716740 1651562 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1119 03:00:17.716750 1651562 kubeadm.go:319] 
	I1119 03:00:17.717026 1651562 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token madf65.z1gbue97bfudhybf \
	I1119 03:00:17.717318 1651562 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:abb22cc8ae8e186956cff8cc7dabd6326c697e35c4ead85bcd3b5702cdc3f73a 
	I1119 03:00:17.722376 1651562 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1119 03:00:17.722609 1651562 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1119 03:00:17.722736 1651562 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1119 03:00:17.722752 1651562 cni.go:84] Creating CNI manager for ""
	I1119 03:00:17.722760 1651562 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 03:00:17.727760 1651562 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1119 03:00:17.126627 1649559 addons.go:515] duration metric: took 1.815565892s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1119 03:00:17.559346 1649559 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-579203" context rescaled to 1 replicas
	I1119 03:00:17.731105 1651562 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1119 03:00:17.736065 1651562 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1119 03:00:17.736083 1651562 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1119 03:00:17.751377 1651562 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1119 03:00:18.075216 1651562 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1119 03:00:18.075358 1651562 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 03:00:18.075427 1651562 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-592123 minikube.k8s.io/updated_at=2025_11_19T03_00_18_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ebd49efe8b7d44f89fc96e21beffcd64dfee8277 minikube.k8s.io/name=embed-certs-592123 minikube.k8s.io/primary=true
	I1119 03:00:18.231325 1651562 ops.go:34] apiserver oom_adj: -16
	I1119 03:00:18.231441 1651562 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 03:00:18.732279 1651562 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 03:00:19.231764 1651562 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 03:00:19.731559 1651562 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 03:00:20.232009 1651562 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 03:00:20.731554 1651562 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 03:00:21.232258 1651562 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 03:00:21.731550 1651562 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 03:00:22.231638 1651562 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 03:00:22.732352 1651562 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 03:00:22.833366 1651562 kubeadm.go:1114] duration metric: took 4.758053785s to wait for elevateKubeSystemPrivileges
	I1119 03:00:22.833391 1651562 kubeadm.go:403] duration metric: took 25.614114647s to StartCluster
	I1119 03:00:22.833408 1651562 settings.go:142] acquiring lock: {Name:mk60840cd190596747bdc8f4ec6ab9c30048bf11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 03:00:22.833467 1651562 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21924-1463525/kubeconfig
	I1119 03:00:22.834852 1651562 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/kubeconfig: {Name:mk569dbb5fa76b01404eb61ebb04e9884665ea78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 03:00:22.835086 1651562 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 03:00:22.835235 1651562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1119 03:00:22.835504 1651562 config.go:182] Loaded profile config "embed-certs-592123": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 03:00:22.835536 1651562 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 03:00:22.835598 1651562 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-592123"
	I1119 03:00:22.835612 1651562 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-592123"
	I1119 03:00:22.835632 1651562 host.go:66] Checking if "embed-certs-592123" exists ...
	I1119 03:00:22.836107 1651562 cli_runner.go:164] Run: docker container inspect embed-certs-592123 --format={{.State.Status}}
	I1119 03:00:22.836802 1651562 addons.go:70] Setting default-storageclass=true in profile "embed-certs-592123"
	I1119 03:00:22.836840 1651562 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-592123"
	I1119 03:00:22.837116 1651562 cli_runner.go:164] Run: docker container inspect embed-certs-592123 --format={{.State.Status}}
	I1119 03:00:22.838605 1651562 out.go:179] * Verifying Kubernetes components...
	I1119 03:00:22.842092 1651562 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 03:00:22.873658 1651562 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1119 03:00:19.058082 1649559 node_ready.go:57] node "default-k8s-diff-port-579203" has "Ready":"False" status (will retry)
	W1119 03:00:21.061636 1649559 node_ready.go:57] node "default-k8s-diff-port-579203" has "Ready":"False" status (will retry)
	W1119 03:00:23.558675 1649559 node_ready.go:57] node "default-k8s-diff-port-579203" has "Ready":"False" status (will retry)
	I1119 03:00:22.876582 1651562 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 03:00:22.876604 1651562 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 03:00:22.876674 1651562 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-592123
	I1119 03:00:22.884255 1651562 addons.go:239] Setting addon default-storageclass=true in "embed-certs-592123"
	I1119 03:00:22.884327 1651562 host.go:66] Checking if "embed-certs-592123" exists ...
	I1119 03:00:22.885985 1651562 cli_runner.go:164] Run: docker container inspect embed-certs-592123 --format={{.State.Status}}
	I1119 03:00:22.913811 1651562 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34910 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/embed-certs-592123/id_rsa Username:docker}
	I1119 03:00:22.933093 1651562 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 03:00:22.933122 1651562 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 03:00:22.933182 1651562 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-592123
	I1119 03:00:22.958831 1651562 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34910 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/embed-certs-592123/id_rsa Username:docker}
	I1119 03:00:23.184571 1651562 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 03:00:23.257551 1651562 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 03:00:23.270820 1651562 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 03:00:23.271076 1651562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1119 03:00:24.103295 1651562 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1119 03:00:24.106291 1651562 node_ready.go:35] waiting up to 6m0s for node "embed-certs-592123" to be "Ready" ...
	I1119 03:00:24.109606 1651562 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1119 03:00:24.112563 1651562 addons.go:515] duration metric: took 1.277008933s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1119 03:00:24.607361 1651562 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-592123" context rescaled to 1 replicas
	W1119 03:00:26.109137 1651562 node_ready.go:57] node "embed-certs-592123" has "Ready":"False" status (will retry)
	W1119 03:00:25.561925 1649559 node_ready.go:57] node "default-k8s-diff-port-579203" has "Ready":"False" status (will retry)
	W1119 03:00:28.058132 1649559 node_ready.go:57] node "default-k8s-diff-port-579203" has "Ready":"False" status (will retry)
	W1119 03:00:28.109533 1651562 node_ready.go:57] node "embed-certs-592123" has "Ready":"False" status (will retry)
	W1119 03:00:30.109686 1651562 node_ready.go:57] node "embed-certs-592123" has "Ready":"False" status (will retry)
	W1119 03:00:30.061791 1649559 node_ready.go:57] node "default-k8s-diff-port-579203" has "Ready":"False" status (will retry)
	W1119 03:00:32.557783 1649559 node_ready.go:57] node "default-k8s-diff-port-579203" has "Ready":"False" status (will retry)
	W1119 03:00:32.609789 1651562 node_ready.go:57] node "embed-certs-592123" has "Ready":"False" status (will retry)
	W1119 03:00:34.610128 1651562 node_ready.go:57] node "embed-certs-592123" has "Ready":"False" status (will retry)
	W1119 03:00:34.557881 1649559 node_ready.go:57] node "default-k8s-diff-port-579203" has "Ready":"False" status (will retry)
	W1119 03:00:36.558049 1649559 node_ready.go:57] node "default-k8s-diff-port-579203" has "Ready":"False" status (will retry)
	W1119 03:00:37.110116 1651562 node_ready.go:57] node "embed-certs-592123" has "Ready":"False" status (will retry)
	W1119 03:00:39.615132 1651562 node_ready.go:57] node "embed-certs-592123" has "Ready":"False" status (will retry)
	W1119 03:00:39.058085 1649559 node_ready.go:57] node "default-k8s-diff-port-579203" has "Ready":"False" status (will retry)
	W1119 03:00:41.058520 1649559 node_ready.go:57] node "default-k8s-diff-port-579203" has "Ready":"False" status (will retry)
	W1119 03:00:43.558701 1649559 node_ready.go:57] node "default-k8s-diff-port-579203" has "Ready":"False" status (will retry)
	W1119 03:00:42.112316 1651562 node_ready.go:57] node "embed-certs-592123" has "Ready":"False" status (will retry)
	W1119 03:00:44.610111 1651562 node_ready.go:57] node "embed-certs-592123" has "Ready":"False" status (will retry)
	W1119 03:00:46.058147 1649559 node_ready.go:57] node "default-k8s-diff-port-579203" has "Ready":"False" status (will retry)
	W1119 03:00:48.557815 1649559 node_ready.go:57] node "default-k8s-diff-port-579203" has "Ready":"False" status (will retry)
	W1119 03:00:47.109269 1651562 node_ready.go:57] node "embed-certs-592123" has "Ready":"False" status (will retry)
	W1119 03:00:49.610341 1651562 node_ready.go:57] node "embed-certs-592123" has "Ready":"False" status (will retry)
	W1119 03:00:50.558720 1649559 node_ready.go:57] node "default-k8s-diff-port-579203" has "Ready":"False" status (will retry)
	W1119 03:00:53.058419 1649559 node_ready.go:57] node "default-k8s-diff-port-579203" has "Ready":"False" status (will retry)
	W1119 03:00:52.109800 1651562 node_ready.go:57] node "embed-certs-592123" has "Ready":"False" status (will retry)
	W1119 03:00:54.609765 1651562 node_ready.go:57] node "embed-certs-592123" has "Ready":"False" status (will retry)
	W1119 03:00:55.060314 1649559 node_ready.go:57] node "default-k8s-diff-port-579203" has "Ready":"False" status (will retry)
	I1119 03:00:57.058436 1649559 node_ready.go:49] node "default-k8s-diff-port-579203" is "Ready"
	I1119 03:00:57.058467 1649559 node_ready.go:38] duration metric: took 40.003385948s for node "default-k8s-diff-port-579203" to be "Ready" ...
	I1119 03:00:57.058481 1649559 api_server.go:52] waiting for apiserver process to appear ...
	I1119 03:00:57.058546 1649559 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 03:00:57.070588 1649559 api_server.go:72] duration metric: took 41.758591784s to wait for apiserver process to appear ...
	I1119 03:00:57.070611 1649559 api_server.go:88] waiting for apiserver healthz status ...
	I1119 03:00:57.070629 1649559 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1119 03:00:57.080494 1649559 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1119 03:00:57.081925 1649559 api_server.go:141] control plane version: v1.34.1
	I1119 03:00:57.081953 1649559 api_server.go:131] duration metric: took 11.335422ms to wait for apiserver health ...
	I1119 03:00:57.081963 1649559 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 03:00:57.085138 1649559 system_pods.go:59] 8 kube-system pods found
	I1119 03:00:57.085182 1649559 system_pods.go:61] "coredns-66bc5c9577-pkngt" [d74743aa-7170-415b-9f00-b196bc8b9837] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 03:00:57.085190 1649559 system_pods.go:61] "etcd-default-k8s-diff-port-579203" [e826f0a7-b445-41e7-a7b6-ef191991365e] Running
	I1119 03:00:57.085197 1649559 system_pods.go:61] "kindnet-bt849" [5690abd0-63a3-4580-a0bf-a259dc29f6d0] Running
	I1119 03:00:57.085201 1649559 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-579203" [e50a666b-744d-415d-ac95-e502bf62a072] Running
	I1119 03:00:57.085207 1649559 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-579203" [28be9327-f878-4393-b4d3-dfe89f015c31] Running
	I1119 03:00:57.085213 1649559 system_pods.go:61] "kube-proxy-7ncfq" [2cd4821b-c2c9-4f47-b5de-93e55c8f8c38] Running
	I1119 03:00:57.085218 1649559 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-579203" [5b81d9f1-896a-4c4f-8c41-61b7b48d40ad] Running
	I1119 03:00:57.085224 1649559 system_pods.go:61] "storage-provisioner" [9639e9e0-73e8-48ed-a25a-603c687470cd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 03:00:57.085234 1649559 system_pods.go:74] duration metric: took 3.264448ms to wait for pod list to return data ...
	I1119 03:00:57.085247 1649559 default_sa.go:34] waiting for default service account to be created ...
	I1119 03:00:57.087946 1649559 default_sa.go:45] found service account: "default"
	I1119 03:00:57.087975 1649559 default_sa.go:55] duration metric: took 2.720103ms for default service account to be created ...
	I1119 03:00:57.087985 1649559 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 03:00:57.091006 1649559 system_pods.go:86] 8 kube-system pods found
	I1119 03:00:57.091052 1649559 system_pods.go:89] "coredns-66bc5c9577-pkngt" [d74743aa-7170-415b-9f00-b196bc8b9837] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 03:00:57.091059 1649559 system_pods.go:89] "etcd-default-k8s-diff-port-579203" [e826f0a7-b445-41e7-a7b6-ef191991365e] Running
	I1119 03:00:57.091067 1649559 system_pods.go:89] "kindnet-bt849" [5690abd0-63a3-4580-a0bf-a259dc29f6d0] Running
	I1119 03:00:57.091072 1649559 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-579203" [e50a666b-744d-415d-ac95-e502bf62a072] Running
	I1119 03:00:57.091077 1649559 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-579203" [28be9327-f878-4393-b4d3-dfe89f015c31] Running
	I1119 03:00:57.091082 1649559 system_pods.go:89] "kube-proxy-7ncfq" [2cd4821b-c2c9-4f47-b5de-93e55c8f8c38] Running
	I1119 03:00:57.091086 1649559 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-579203" [5b81d9f1-896a-4c4f-8c41-61b7b48d40ad] Running
	I1119 03:00:57.091092 1649559 system_pods.go:89] "storage-provisioner" [9639e9e0-73e8-48ed-a25a-603c687470cd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 03:00:57.091117 1649559 retry.go:31] will retry after 302.420543ms: missing components: kube-dns
	I1119 03:00:57.398309 1649559 system_pods.go:86] 8 kube-system pods found
	I1119 03:00:57.398342 1649559 system_pods.go:89] "coredns-66bc5c9577-pkngt" [d74743aa-7170-415b-9f00-b196bc8b9837] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 03:00:57.398350 1649559 system_pods.go:89] "etcd-default-k8s-diff-port-579203" [e826f0a7-b445-41e7-a7b6-ef191991365e] Running
	I1119 03:00:57.398357 1649559 system_pods.go:89] "kindnet-bt849" [5690abd0-63a3-4580-a0bf-a259dc29f6d0] Running
	I1119 03:00:57.398362 1649559 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-579203" [e50a666b-744d-415d-ac95-e502bf62a072] Running
	I1119 03:00:57.398366 1649559 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-579203" [28be9327-f878-4393-b4d3-dfe89f015c31] Running
	I1119 03:00:57.398372 1649559 system_pods.go:89] "kube-proxy-7ncfq" [2cd4821b-c2c9-4f47-b5de-93e55c8f8c38] Running
	I1119 03:00:57.398376 1649559 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-579203" [5b81d9f1-896a-4c4f-8c41-61b7b48d40ad] Running
	I1119 03:00:57.398382 1649559 system_pods.go:89] "storage-provisioner" [9639e9e0-73e8-48ed-a25a-603c687470cd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 03:00:57.398402 1649559 retry.go:31] will retry after 257.32747ms: missing components: kube-dns
	I1119 03:00:57.664889 1649559 system_pods.go:86] 8 kube-system pods found
	I1119 03:00:57.664919 1649559 system_pods.go:89] "coredns-66bc5c9577-pkngt" [d74743aa-7170-415b-9f00-b196bc8b9837] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 03:00:57.664927 1649559 system_pods.go:89] "etcd-default-k8s-diff-port-579203" [e826f0a7-b445-41e7-a7b6-ef191991365e] Running
	I1119 03:00:57.664933 1649559 system_pods.go:89] "kindnet-bt849" [5690abd0-63a3-4580-a0bf-a259dc29f6d0] Running
	I1119 03:00:57.664938 1649559 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-579203" [e50a666b-744d-415d-ac95-e502bf62a072] Running
	I1119 03:00:57.664942 1649559 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-579203" [28be9327-f878-4393-b4d3-dfe89f015c31] Running
	I1119 03:00:57.664946 1649559 system_pods.go:89] "kube-proxy-7ncfq" [2cd4821b-c2c9-4f47-b5de-93e55c8f8c38] Running
	I1119 03:00:57.664950 1649559 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-579203" [5b81d9f1-896a-4c4f-8c41-61b7b48d40ad] Running
	I1119 03:00:57.664956 1649559 system_pods.go:89] "storage-provisioner" [9639e9e0-73e8-48ed-a25a-603c687470cd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 03:00:57.664974 1649559 retry.go:31] will retry after 356.664094ms: missing components: kube-dns
	I1119 03:00:58.026523 1649559 system_pods.go:86] 8 kube-system pods found
	I1119 03:00:58.026572 1649559 system_pods.go:89] "coredns-66bc5c9577-pkngt" [d74743aa-7170-415b-9f00-b196bc8b9837] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 03:00:58.026583 1649559 system_pods.go:89] "etcd-default-k8s-diff-port-579203" [e826f0a7-b445-41e7-a7b6-ef191991365e] Running
	I1119 03:00:58.026592 1649559 system_pods.go:89] "kindnet-bt849" [5690abd0-63a3-4580-a0bf-a259dc29f6d0] Running
	I1119 03:00:58.026597 1649559 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-579203" [e50a666b-744d-415d-ac95-e502bf62a072] Running
	I1119 03:00:58.026601 1649559 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-579203" [28be9327-f878-4393-b4d3-dfe89f015c31] Running
	I1119 03:00:58.026607 1649559 system_pods.go:89] "kube-proxy-7ncfq" [2cd4821b-c2c9-4f47-b5de-93e55c8f8c38] Running
	I1119 03:00:58.026612 1649559 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-579203" [5b81d9f1-896a-4c4f-8c41-61b7b48d40ad] Running
	I1119 03:00:58.026624 1649559 system_pods.go:89] "storage-provisioner" [9639e9e0-73e8-48ed-a25a-603c687470cd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 03:00:58.026642 1649559 retry.go:31] will retry after 383.232625ms: missing components: kube-dns
	I1119 03:00:58.413261 1649559 system_pods.go:86] 8 kube-system pods found
	I1119 03:00:58.413294 1649559 system_pods.go:89] "coredns-66bc5c9577-pkngt" [d74743aa-7170-415b-9f00-b196bc8b9837] Running
	I1119 03:00:58.413301 1649559 system_pods.go:89] "etcd-default-k8s-diff-port-579203" [e826f0a7-b445-41e7-a7b6-ef191991365e] Running
	I1119 03:00:58.413306 1649559 system_pods.go:89] "kindnet-bt849" [5690abd0-63a3-4580-a0bf-a259dc29f6d0] Running
	I1119 03:00:58.413310 1649559 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-579203" [e50a666b-744d-415d-ac95-e502bf62a072] Running
	I1119 03:00:58.413314 1649559 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-579203" [28be9327-f878-4393-b4d3-dfe89f015c31] Running
	I1119 03:00:58.413319 1649559 system_pods.go:89] "kube-proxy-7ncfq" [2cd4821b-c2c9-4f47-b5de-93e55c8f8c38] Running
	I1119 03:00:58.413322 1649559 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-579203" [5b81d9f1-896a-4c4f-8c41-61b7b48d40ad] Running
	I1119 03:00:58.413327 1649559 system_pods.go:89] "storage-provisioner" [9639e9e0-73e8-48ed-a25a-603c687470cd] Running
	I1119 03:00:58.413334 1649559 system_pods.go:126] duration metric: took 1.325343399s to wait for k8s-apps to be running ...
	I1119 03:00:58.413345 1649559 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 03:00:58.413412 1649559 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 03:00:58.425892 1649559 system_svc.go:56] duration metric: took 12.537012ms WaitForService to wait for kubelet
	I1119 03:00:58.425971 1649559 kubeadm.go:587] duration metric: took 43.113978853s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 03:00:58.426003 1649559 node_conditions.go:102] verifying NodePressure condition ...
	I1119 03:00:58.428918 1649559 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1119 03:00:58.428950 1649559 node_conditions.go:123] node cpu capacity is 2
	I1119 03:00:58.428963 1649559 node_conditions.go:105] duration metric: took 2.9476ms to run NodePressure ...
	I1119 03:00:58.428993 1649559 start.go:242] waiting for startup goroutines ...
	I1119 03:00:58.429007 1649559 start.go:247] waiting for cluster config update ...
	I1119 03:00:58.429019 1649559 start.go:256] writing updated cluster config ...
	I1119 03:00:58.429331 1649559 ssh_runner.go:195] Run: rm -f paused
	I1119 03:00:58.432902 1649559 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 03:00:58.436665 1649559 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-pkngt" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:00:58.442062 1649559 pod_ready.go:94] pod "coredns-66bc5c9577-pkngt" is "Ready"
	I1119 03:00:58.442086 1649559 pod_ready.go:86] duration metric: took 5.386882ms for pod "coredns-66bc5c9577-pkngt" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:00:58.444419 1649559 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-579203" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:00:58.448806 1649559 pod_ready.go:94] pod "etcd-default-k8s-diff-port-579203" is "Ready"
	I1119 03:00:58.448834 1649559 pod_ready.go:86] duration metric: took 4.39505ms for pod "etcd-default-k8s-diff-port-579203" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:00:58.451127 1649559 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-579203" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:00:58.455594 1649559 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-579203" is "Ready"
	I1119 03:00:58.455619 1649559 pod_ready.go:86] duration metric: took 4.470084ms for pod "kube-apiserver-default-k8s-diff-port-579203" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:00:58.457927 1649559 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-579203" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:00:58.837776 1649559 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-579203" is "Ready"
	I1119 03:00:58.837805 1649559 pod_ready.go:86] duration metric: took 379.853189ms for pod "kube-controller-manager-default-k8s-diff-port-579203" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:00:59.037546 1649559 pod_ready.go:83] waiting for pod "kube-proxy-7ncfq" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:00:59.437539 1649559 pod_ready.go:94] pod "kube-proxy-7ncfq" is "Ready"
	I1119 03:00:59.437566 1649559 pod_ready.go:86] duration metric: took 399.953922ms for pod "kube-proxy-7ncfq" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:00:59.638289 1649559 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-579203" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:01:00.043762 1649559 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-579203" is "Ready"
	I1119 03:01:00.043871 1649559 pod_ready.go:86] duration metric: took 405.555944ms for pod "kube-scheduler-default-k8s-diff-port-579203" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:01:00.043902 1649559 pod_ready.go:40] duration metric: took 1.610970834s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 03:01:00.239287 1649559 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1119 03:01:00.247376 1649559 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-579203" cluster and "default" namespace by default
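
The retry loop above ("missing components: kube-dns", with progressively longer back-offs) and the extra 4m0s wait on the kube-system component labels can be reproduced by hand against the same cluster. A minimal sketch with plain kubectl, assuming the default-k8s-diff-port-579203 context this run just wrote:

	# wait for the same CoreDNS pod the log was polling, with the same 4m budget
	kubectl --context default-k8s-diff-port-579203 -n kube-system \
	  wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m
	# then list the kube-system pods the readiness map covers
	kubectl --context default-k8s-diff-port-579203 -n kube-system get pods
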
	W1119 03:00:57.109220 1651562 node_ready.go:57] node "embed-certs-592123" has "Ready":"False" status (will retry)
	W1119 03:00:59.609331 1651562 node_ready.go:57] node "embed-certs-592123" has "Ready":"False" status (will retry)
	W1119 03:01:01.609678 1651562 node_ready.go:57] node "embed-certs-592123" has "Ready":"False" status (will retry)
	W1119 03:01:04.109974 1651562 node_ready.go:57] node "embed-certs-592123" has "Ready":"False" status (will retry)
	I1119 03:01:04.609470 1651562 node_ready.go:49] node "embed-certs-592123" is "Ready"
	I1119 03:01:04.609501 1651562 node_ready.go:38] duration metric: took 40.50317859s for node "embed-certs-592123" to be "Ready" ...
	I1119 03:01:04.609544 1651562 api_server.go:52] waiting for apiserver process to appear ...
	I1119 03:01:04.609604 1651562 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 03:01:04.624373 1651562 api_server.go:72] duration metric: took 41.789257238s to wait for apiserver process to appear ...
	I1119 03:01:04.624395 1651562 api_server.go:88] waiting for apiserver healthz status ...
	I1119 03:01:04.624413 1651562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 03:01:04.637333 1651562 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1119 03:01:04.638550 1651562 api_server.go:141] control plane version: v1.34.1
	I1119 03:01:04.638578 1651562 api_server.go:131] duration metric: took 14.176177ms to wait for apiserver health ...
	I1119 03:01:04.638587 1651562 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 03:01:04.649289 1651562 system_pods.go:59] 8 kube-system pods found
	I1119 03:01:04.649329 1651562 system_pods.go:61] "coredns-66bc5c9577-vtc44" [5e3bd982-5dec-4b41-97a5-feea8996184f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 03:01:04.649336 1651562 system_pods.go:61] "etcd-embed-certs-592123" [7a5b129c-3716-4d23-8c43-28d58936c458] Running
	I1119 03:01:04.649342 1651562 system_pods.go:61] "kindnet-sv99p" [30531f66-1993-4675-a8a7-c88fbd84c7e0] Running
	I1119 03:01:04.649348 1651562 system_pods.go:61] "kube-apiserver-embed-certs-592123" [a890bda5-d7b3-4776-9e06-d9323deea3d5] Running
	I1119 03:01:04.649353 1651562 system_pods.go:61] "kube-controller-manager-embed-certs-592123" [b5eadc5e-a4d2-45fb-ac21-8c466ec953fb] Running
	I1119 03:01:04.649359 1651562 system_pods.go:61] "kube-proxy-55pcf" [5d001372-9066-4ffc-a2f5-1f51e988cb2a] Running
	I1119 03:01:04.649364 1651562 system_pods.go:61] "kube-scheduler-embed-certs-592123" [d216d9cd-538e-4206-b0cf-37d7c5e8d4a3] Running
	I1119 03:01:04.649376 1651562 system_pods.go:61] "storage-provisioner" [34c0ebbf-6c58-4d0b-94de-dbfcf04b254d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 03:01:04.649385 1651562 system_pods.go:74] duration metric: took 10.79252ms to wait for pod list to return data ...
	I1119 03:01:04.649401 1651562 default_sa.go:34] waiting for default service account to be created ...
	I1119 03:01:04.655345 1651562 default_sa.go:45] found service account: "default"
	I1119 03:01:04.655373 1651562 default_sa.go:55] duration metric: took 5.96476ms for default service account to be created ...
	I1119 03:01:04.655383 1651562 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 03:01:04.659448 1651562 system_pods.go:86] 8 kube-system pods found
	I1119 03:01:04.659478 1651562 system_pods.go:89] "coredns-66bc5c9577-vtc44" [5e3bd982-5dec-4b41-97a5-feea8996184f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 03:01:04.659486 1651562 system_pods.go:89] "etcd-embed-certs-592123" [7a5b129c-3716-4d23-8c43-28d58936c458] Running
	I1119 03:01:04.659492 1651562 system_pods.go:89] "kindnet-sv99p" [30531f66-1993-4675-a8a7-c88fbd84c7e0] Running
	I1119 03:01:04.659496 1651562 system_pods.go:89] "kube-apiserver-embed-certs-592123" [a890bda5-d7b3-4776-9e06-d9323deea3d5] Running
	I1119 03:01:04.659501 1651562 system_pods.go:89] "kube-controller-manager-embed-certs-592123" [b5eadc5e-a4d2-45fb-ac21-8c466ec953fb] Running
	I1119 03:01:04.659505 1651562 system_pods.go:89] "kube-proxy-55pcf" [5d001372-9066-4ffc-a2f5-1f51e988cb2a] Running
	I1119 03:01:04.659509 1651562 system_pods.go:89] "kube-scheduler-embed-certs-592123" [d216d9cd-538e-4206-b0cf-37d7c5e8d4a3] Running
	I1119 03:01:04.659515 1651562 system_pods.go:89] "storage-provisioner" [34c0ebbf-6c58-4d0b-94de-dbfcf04b254d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 03:01:04.659537 1651562 retry.go:31] will retry after 250.30161ms: missing components: kube-dns
	I1119 03:01:04.914554 1651562 system_pods.go:86] 8 kube-system pods found
	I1119 03:01:04.914588 1651562 system_pods.go:89] "coredns-66bc5c9577-vtc44" [5e3bd982-5dec-4b41-97a5-feea8996184f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 03:01:04.914596 1651562 system_pods.go:89] "etcd-embed-certs-592123" [7a5b129c-3716-4d23-8c43-28d58936c458] Running
	I1119 03:01:04.914604 1651562 system_pods.go:89] "kindnet-sv99p" [30531f66-1993-4675-a8a7-c88fbd84c7e0] Running
	I1119 03:01:04.914609 1651562 system_pods.go:89] "kube-apiserver-embed-certs-592123" [a890bda5-d7b3-4776-9e06-d9323deea3d5] Running
	I1119 03:01:04.914614 1651562 system_pods.go:89] "kube-controller-manager-embed-certs-592123" [b5eadc5e-a4d2-45fb-ac21-8c466ec953fb] Running
	I1119 03:01:04.914618 1651562 system_pods.go:89] "kube-proxy-55pcf" [5d001372-9066-4ffc-a2f5-1f51e988cb2a] Running
	I1119 03:01:04.914623 1651562 system_pods.go:89] "kube-scheduler-embed-certs-592123" [d216d9cd-538e-4206-b0cf-37d7c5e8d4a3] Running
	I1119 03:01:04.914629 1651562 system_pods.go:89] "storage-provisioner" [34c0ebbf-6c58-4d0b-94de-dbfcf04b254d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 03:01:04.914648 1651562 retry.go:31] will retry after 267.466957ms: missing components: kube-dns
	I1119 03:01:05.186184 1651562 system_pods.go:86] 8 kube-system pods found
	I1119 03:01:05.186217 1651562 system_pods.go:89] "coredns-66bc5c9577-vtc44" [5e3bd982-5dec-4b41-97a5-feea8996184f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 03:01:05.186224 1651562 system_pods.go:89] "etcd-embed-certs-592123" [7a5b129c-3716-4d23-8c43-28d58936c458] Running
	I1119 03:01:05.186230 1651562 system_pods.go:89] "kindnet-sv99p" [30531f66-1993-4675-a8a7-c88fbd84c7e0] Running
	I1119 03:01:05.186235 1651562 system_pods.go:89] "kube-apiserver-embed-certs-592123" [a890bda5-d7b3-4776-9e06-d9323deea3d5] Running
	I1119 03:01:05.186239 1651562 system_pods.go:89] "kube-controller-manager-embed-certs-592123" [b5eadc5e-a4d2-45fb-ac21-8c466ec953fb] Running
	I1119 03:01:05.186243 1651562 system_pods.go:89] "kube-proxy-55pcf" [5d001372-9066-4ffc-a2f5-1f51e988cb2a] Running
	I1119 03:01:05.186247 1651562 system_pods.go:89] "kube-scheduler-embed-certs-592123" [d216d9cd-538e-4206-b0cf-37d7c5e8d4a3] Running
	I1119 03:01:05.186254 1651562 system_pods.go:89] "storage-provisioner" [34c0ebbf-6c58-4d0b-94de-dbfcf04b254d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 03:01:05.186289 1651562 retry.go:31] will retry after 303.104661ms: missing components: kube-dns
	I1119 03:01:05.493468 1651562 system_pods.go:86] 8 kube-system pods found
	I1119 03:01:05.493530 1651562 system_pods.go:89] "coredns-66bc5c9577-vtc44" [5e3bd982-5dec-4b41-97a5-feea8996184f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 03:01:05.493539 1651562 system_pods.go:89] "etcd-embed-certs-592123" [7a5b129c-3716-4d23-8c43-28d58936c458] Running
	I1119 03:01:05.493545 1651562 system_pods.go:89] "kindnet-sv99p" [30531f66-1993-4675-a8a7-c88fbd84c7e0] Running
	I1119 03:01:05.493551 1651562 system_pods.go:89] "kube-apiserver-embed-certs-592123" [a890bda5-d7b3-4776-9e06-d9323deea3d5] Running
	I1119 03:01:05.493557 1651562 system_pods.go:89] "kube-controller-manager-embed-certs-592123" [b5eadc5e-a4d2-45fb-ac21-8c466ec953fb] Running
	I1119 03:01:05.493561 1651562 system_pods.go:89] "kube-proxy-55pcf" [5d001372-9066-4ffc-a2f5-1f51e988cb2a] Running
	I1119 03:01:05.493567 1651562 system_pods.go:89] "kube-scheduler-embed-certs-592123" [d216d9cd-538e-4206-b0cf-37d7c5e8d4a3] Running
	I1119 03:01:05.493577 1651562 system_pods.go:89] "storage-provisioner" [34c0ebbf-6c58-4d0b-94de-dbfcf04b254d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 03:01:05.493595 1651562 retry.go:31] will retry after 486.063624ms: missing components: kube-dns
	I1119 03:01:05.983968 1651562 system_pods.go:86] 8 kube-system pods found
	I1119 03:01:05.984001 1651562 system_pods.go:89] "coredns-66bc5c9577-vtc44" [5e3bd982-5dec-4b41-97a5-feea8996184f] Running
	I1119 03:01:05.984008 1651562 system_pods.go:89] "etcd-embed-certs-592123" [7a5b129c-3716-4d23-8c43-28d58936c458] Running
	I1119 03:01:05.984012 1651562 system_pods.go:89] "kindnet-sv99p" [30531f66-1993-4675-a8a7-c88fbd84c7e0] Running
	I1119 03:01:05.984017 1651562 system_pods.go:89] "kube-apiserver-embed-certs-592123" [a890bda5-d7b3-4776-9e06-d9323deea3d5] Running
	I1119 03:01:05.984023 1651562 system_pods.go:89] "kube-controller-manager-embed-certs-592123" [b5eadc5e-a4d2-45fb-ac21-8c466ec953fb] Running
	I1119 03:01:05.984027 1651562 system_pods.go:89] "kube-proxy-55pcf" [5d001372-9066-4ffc-a2f5-1f51e988cb2a] Running
	I1119 03:01:05.984031 1651562 system_pods.go:89] "kube-scheduler-embed-certs-592123" [d216d9cd-538e-4206-b0cf-37d7c5e8d4a3] Running
	I1119 03:01:05.984035 1651562 system_pods.go:89] "storage-provisioner" [34c0ebbf-6c58-4d0b-94de-dbfcf04b254d] Running
	I1119 03:01:05.984044 1651562 system_pods.go:126] duration metric: took 1.328654209s to wait for k8s-apps to be running ...
	I1119 03:01:05.984055 1651562 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 03:01:05.984111 1651562 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 03:01:05.997239 1651562 system_svc.go:56] duration metric: took 13.173473ms WaitForService to wait for kubelet
	I1119 03:01:05.997320 1651562 kubeadm.go:587] duration metric: took 43.162208485s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 03:01:05.997363 1651562 node_conditions.go:102] verifying NodePressure condition ...
	I1119 03:01:06.000555 1651562 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1119 03:01:06.000590 1651562 node_conditions.go:123] node cpu capacity is 2
	I1119 03:01:06.000612 1651562 node_conditions.go:105] duration metric: took 3.230136ms to run NodePressure ...
	I1119 03:01:06.000625 1651562 start.go:242] waiting for startup goroutines ...
	I1119 03:01:06.000633 1651562 start.go:247] waiting for cluster config update ...
	I1119 03:01:06.000644 1651562 start.go:256] writing updated cluster config ...
	I1119 03:01:06.000943 1651562 ssh_runner.go:195] Run: rm -f paused
	I1119 03:01:06.007452 1651562 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 03:01:06.083960 1651562 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-vtc44" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:01:06.088812 1651562 pod_ready.go:94] pod "coredns-66bc5c9577-vtc44" is "Ready"
	I1119 03:01:06.088842 1651562 pod_ready.go:86] duration metric: took 4.858954ms for pod "coredns-66bc5c9577-vtc44" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:01:06.091253 1651562 pod_ready.go:83] waiting for pod "etcd-embed-certs-592123" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:01:06.096278 1651562 pod_ready.go:94] pod "etcd-embed-certs-592123" is "Ready"
	I1119 03:01:06.096307 1651562 pod_ready.go:86] duration metric: took 5.028106ms for pod "etcd-embed-certs-592123" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:01:06.098875 1651562 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-592123" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:01:06.103481 1651562 pod_ready.go:94] pod "kube-apiserver-embed-certs-592123" is "Ready"
	I1119 03:01:06.103508 1651562 pod_ready.go:86] duration metric: took 4.560067ms for pod "kube-apiserver-embed-certs-592123" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:01:06.106053 1651562 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-592123" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:01:06.411948 1651562 pod_ready.go:94] pod "kube-controller-manager-embed-certs-592123" is "Ready"
	I1119 03:01:06.411978 1651562 pod_ready.go:86] duration metric: took 305.893932ms for pod "kube-controller-manager-embed-certs-592123" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:01:06.612144 1651562 pod_ready.go:83] waiting for pod "kube-proxy-55pcf" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:01:07.011553 1651562 pod_ready.go:94] pod "kube-proxy-55pcf" is "Ready"
	I1119 03:01:07.011582 1651562 pod_ready.go:86] duration metric: took 399.359353ms for pod "kube-proxy-55pcf" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:01:07.211682 1651562 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-592123" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:01:07.611730 1651562 pod_ready.go:94] pod "kube-scheduler-embed-certs-592123" is "Ready"
	I1119 03:01:07.611757 1651562 pod_ready.go:86] duration metric: took 400.048918ms for pod "kube-scheduler-embed-certs-592123" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:01:07.611770 1651562 pod_ready.go:40] duration metric: took 1.604283109s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 03:01:07.675431 1651562 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1119 03:01:07.678607 1651562 out.go:179] * Done! kubectl is now configured to use "embed-certs-592123" cluster and "default" namespace by default
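
The apiserver health probe logged for embed-certs-592123 (GET https://192.168.76.2:8443/healthz returning 200 and "ok") can normally be repeated through the API server once the context exists; a minimal sketch, assuming the embed-certs-592123 context created above:

	# hit the same healthz endpoint the start-up code checks
	kubectl --context embed-certs-592123 get --raw /healthz
	# expected output on a healthy control plane: ok
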
	
	
	==> CRI-O <==
	Nov 19 03:00:57 default-k8s-diff-port-579203 crio[835]: time="2025-11-19T03:00:57.202011218Z" level=info msg="Created container 440046bfb5d9114fe7905ee223d2f6b6ecd8cf769b31d31f6f4d7a2a3ab4b7cb: kube-system/coredns-66bc5c9577-pkngt/coredns" id=0335c6cd-f706-4d5c-9682-4a75af9a5b13 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 03:00:57 default-k8s-diff-port-579203 crio[835]: time="2025-11-19T03:00:57.206920764Z" level=info msg="Starting container: 440046bfb5d9114fe7905ee223d2f6b6ecd8cf769b31d31f6f4d7a2a3ab4b7cb" id=3f7a5158-19b1-40fa-83ab-6bf71632589c name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 03:00:57 default-k8s-diff-port-579203 crio[835]: time="2025-11-19T03:00:57.209797752Z" level=info msg="Started container" PID=1753 containerID=440046bfb5d9114fe7905ee223d2f6b6ecd8cf769b31d31f6f4d7a2a3ab4b7cb description=kube-system/coredns-66bc5c9577-pkngt/coredns id=3f7a5158-19b1-40fa-83ab-6bf71632589c name=/runtime.v1.RuntimeService/StartContainer sandboxID=47e18728369d7412290a9b9f2143ef210e821a69f07d18db4fb10b047debee18
	Nov 19 03:01:00 default-k8s-diff-port-579203 crio[835]: time="2025-11-19T03:01:00.941273561Z" level=info msg="Running pod sandbox: default/busybox/POD" id=ae8cbb7d-ebc4-4959-8b76-057a36ff06b8 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 03:01:00 default-k8s-diff-port-579203 crio[835]: time="2025-11-19T03:01:00.941347274Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 03:01:00 default-k8s-diff-port-579203 crio[835]: time="2025-11-19T03:01:00.954820078Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:ceb8b4187ad002ebd968182200a99d07379d74f4c5cc6549cd4051da5565d689 UID:e24610f2-fbb3-428c-b4a9-925911a13a98 NetNS:/var/run/netns/26e3f7c8-5220-4721-96f7-cc7719500bae Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40004ad288}] Aliases:map[]}"
	Nov 19 03:01:00 default-k8s-diff-port-579203 crio[835]: time="2025-11-19T03:01:00.955247307Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 19 03:01:00 default-k8s-diff-port-579203 crio[835]: time="2025-11-19T03:01:00.971618687Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:ceb8b4187ad002ebd968182200a99d07379d74f4c5cc6549cd4051da5565d689 UID:e24610f2-fbb3-428c-b4a9-925911a13a98 NetNS:/var/run/netns/26e3f7c8-5220-4721-96f7-cc7719500bae Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40004ad288}] Aliases:map[]}"
	Nov 19 03:01:00 default-k8s-diff-port-579203 crio[835]: time="2025-11-19T03:01:00.971797783Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 19 03:01:00 default-k8s-diff-port-579203 crio[835]: time="2025-11-19T03:01:00.976793251Z" level=info msg="Ran pod sandbox ceb8b4187ad002ebd968182200a99d07379d74f4c5cc6549cd4051da5565d689 with infra container: default/busybox/POD" id=ae8cbb7d-ebc4-4959-8b76-057a36ff06b8 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 03:01:00 default-k8s-diff-port-579203 crio[835]: time="2025-11-19T03:01:00.978679876Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=325ad507-4db2-4945-ad23-f08e9c9b97b2 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 03:01:00 default-k8s-diff-port-579203 crio[835]: time="2025-11-19T03:01:00.978935048Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=325ad507-4db2-4945-ad23-f08e9c9b97b2 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 03:01:00 default-k8s-diff-port-579203 crio[835]: time="2025-11-19T03:01:00.978988199Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=325ad507-4db2-4945-ad23-f08e9c9b97b2 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 03:01:00 default-k8s-diff-port-579203 crio[835]: time="2025-11-19T03:01:00.980910467Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=79f632e7-71ba-469f-9619-d1ad1eb78230 name=/runtime.v1.ImageService/PullImage
	Nov 19 03:01:00 default-k8s-diff-port-579203 crio[835]: time="2025-11-19T03:01:00.983676123Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 19 03:01:03 default-k8s-diff-port-579203 crio[835]: time="2025-11-19T03:01:03.171817585Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=79f632e7-71ba-469f-9619-d1ad1eb78230 name=/runtime.v1.ImageService/PullImage
	Nov 19 03:01:03 default-k8s-diff-port-579203 crio[835]: time="2025-11-19T03:01:03.172559201Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=642d32d4-4061-4846-b0a6-ca86991e46f3 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 03:01:03 default-k8s-diff-port-579203 crio[835]: time="2025-11-19T03:01:03.17445815Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=4925eb05-824a-4104-aa4a-8d01096fb6b4 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 03:01:03 default-k8s-diff-port-579203 crio[835]: time="2025-11-19T03:01:03.18219068Z" level=info msg="Creating container: default/busybox/busybox" id=df9aec6b-fc08-4462-b83f-21f0f8486596 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 03:01:03 default-k8s-diff-port-579203 crio[835]: time="2025-11-19T03:01:03.182322442Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 03:01:03 default-k8s-diff-port-579203 crio[835]: time="2025-11-19T03:01:03.190087422Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 03:01:03 default-k8s-diff-port-579203 crio[835]: time="2025-11-19T03:01:03.190598661Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 03:01:03 default-k8s-diff-port-579203 crio[835]: time="2025-11-19T03:01:03.205762592Z" level=info msg="Created container a0e31cdbb34ce05b30e99292ed2abd598f24cd1e75711bf9e6cc2e9fcab751cb: default/busybox/busybox" id=df9aec6b-fc08-4462-b83f-21f0f8486596 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 03:01:03 default-k8s-diff-port-579203 crio[835]: time="2025-11-19T03:01:03.206970271Z" level=info msg="Starting container: a0e31cdbb34ce05b30e99292ed2abd598f24cd1e75711bf9e6cc2e9fcab751cb" id=622c8a50-b48c-4fd4-9980-b500ee10f157 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 03:01:03 default-k8s-diff-port-579203 crio[835]: time="2025-11-19T03:01:03.212703145Z" level=info msg="Started container" PID=1808 containerID=a0e31cdbb34ce05b30e99292ed2abd598f24cd1e75711bf9e6cc2e9fcab751cb description=default/busybox/busybox id=622c8a50-b48c-4fd4-9980-b500ee10f157 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ceb8b4187ad002ebd968182200a99d07379d74f4c5cc6549cd4051da5565d689
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	a0e31cdbb34ce       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago        Running             busybox                   0                   ceb8b4187ad00       busybox                                                default
	440046bfb5d91       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      14 seconds ago       Running             coredns                   0                   47e18728369d7       coredns-66bc5c9577-pkngt                               kube-system
	4d37d47d8368b       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      14 seconds ago       Running             storage-provisioner       0                   ea77156b2a075       storage-provisioner                                    kube-system
	89f87dfc74abc       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      55 seconds ago       Running             kindnet-cni               0                   4f24ba80852ed       kindnet-bt849                                          kube-system
	fbe4d87886add       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      55 seconds ago       Running             kube-proxy                0                   adad2e8eec448       kube-proxy-7ncfq                                       kube-system
	4ac47072068fe       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   6316929d572bf       kube-scheduler-default-k8s-diff-port-579203            kube-system
	8e6260991e9f1       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   6df98f929fb39       kube-controller-manager-default-k8s-diff-port-579203   kube-system
	8cce891df76a5       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   3e3fc4d9e3c5a       kube-apiserver-default-k8s-diff-port-579203            kube-system
	31429de3a3480       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   b7c1fc5778d75       etcd-default-k8s-diff-port-579203                      kube-system
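
The container status table above is the CRI-level view of the default-k8s-diff-port-579203 node. Roughly the same listing (a sketch, not necessarily the exact command the report used) can be taken from inside the node with crictl:

	# list all containers known to CRI-O on the node
	minikube ssh -p default-k8s-diff-port-579203 -- sudo crictl ps -a
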
	
	
	==> coredns [440046bfb5d9114fe7905ee223d2f6b6ecd8cf769b31d31f6f4d7a2a3ab4b7cb] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42659 - 10383 "HINFO IN 3388369953562570658.1348855203595443263. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011925443s
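
The only query in the CoreDNS log above is its own HINFO self-check. A quick end-to-end resolution test against this cluster could reuse the busybox pod that the CRI-O log shows being started (a sketch, assuming that pod is still running; busybox 1.28.4-glibc ships a working nslookup):

	kubectl --context default-k8s-diff-port-579203 exec busybox -- nslookup kubernetes.default
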
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-579203
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-579203
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ebd49efe8b7d44f89fc96e21beffcd64dfee8277
	                    minikube.k8s.io/name=default-k8s-diff-port-579203
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T03_00_11_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 03:00:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-579203
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 03:01:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 03:00:56 +0000   Wed, 19 Nov 2025 03:00:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 03:00:56 +0000   Wed, 19 Nov 2025 03:00:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 03:00:56 +0000   Wed, 19 Nov 2025 03:00:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 03:00:56 +0000   Wed, 19 Nov 2025 03:00:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-579203
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                7a64d282-4275-4f3a-a03c-1a14359e0c92
	  Boot ID:                    b92b1939-fcd0-45dc-ac89-2d161566a71c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-pkngt                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     56s
	  kube-system                 etcd-default-k8s-diff-port-579203                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         60s
	  kube-system                 kindnet-bt849                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      56s
	  kube-system                 kube-apiserver-default-k8s-diff-port-579203             250m (12%)    0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-579203    200m (10%)    0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-proxy-7ncfq                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                 kube-scheduler-default-k8s-diff-port-579203             100m (5%)     0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 54s                kube-proxy       
	  Normal   NodeHasSufficientMemory  72s (x8 over 72s)  kubelet          Node default-k8s-diff-port-579203 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    72s (x8 over 72s)  kubelet          Node default-k8s-diff-port-579203 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     72s (x8 over 72s)  kubelet          Node default-k8s-diff-port-579203 status is now: NodeHasSufficientPID
	  Normal   Starting                 61s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 61s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  60s                kubelet          Node default-k8s-diff-port-579203 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    60s                kubelet          Node default-k8s-diff-port-579203 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     60s                kubelet          Node default-k8s-diff-port-579203 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           57s                node-controller  Node default-k8s-diff-port-579203 event: Registered Node default-k8s-diff-port-579203 in Controller
	  Normal   NodeReady                15s                kubelet          Node default-k8s-diff-port-579203 status is now: NodeReady
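
For reference, the Allocated resources block above is just the sum of the per-pod requests in the table: 100m (coredns) + 100m (etcd) + 100m (kindnet) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) = 850m, i.e. 42% of the node's 2000m allocatable CPU; the 220Mi memory figures add up the same way. Roughly this view can be regenerated at any time with:

	kubectl --context default-k8s-diff-port-579203 describe node default-k8s-diff-port-579203
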
	
	
	==> dmesg <==
	[ +37.747558] overlayfs: idmapped layers are currently not supported
	[Nov19 02:37] overlayfs: idmapped layers are currently not supported
	[Nov19 02:38] overlayfs: idmapped layers are currently not supported
	[Nov19 02:39] overlayfs: idmapped layers are currently not supported
	[Nov19 02:41] overlayfs: idmapped layers are currently not supported
	[ +25.528121] overlayfs: idmapped layers are currently not supported
	[ +11.329962] overlayfs: idmapped layers are currently not supported
	[Nov19 02:42] overlayfs: idmapped layers are currently not supported
	[ +16.386117] overlayfs: idmapped layers are currently not supported
	[Nov19 02:43] overlayfs: idmapped layers are currently not supported
	[ +23.762081] overlayfs: idmapped layers are currently not supported
	[Nov19 02:45] overlayfs: idmapped layers are currently not supported
	[Nov19 02:46] overlayfs: idmapped layers are currently not supported
	[Nov19 02:48] overlayfs: idmapped layers are currently not supported
	[Nov19 02:50] overlayfs: idmapped layers are currently not supported
	[ +30.622614] overlayfs: idmapped layers are currently not supported
	[Nov19 02:53] overlayfs: idmapped layers are currently not supported
	[Nov19 02:55] overlayfs: idmapped layers are currently not supported
	[ +48.629499] overlayfs: idmapped layers are currently not supported
	[Nov19 02:56] overlayfs: idmapped layers are currently not supported
	[ +31.470515] overlayfs: idmapped layers are currently not supported
	[Nov19 02:57] overlayfs: idmapped layers are currently not supported
	[Nov19 02:58] overlayfs: idmapped layers are currently not supported
	[Nov19 03:00] overlayfs: idmapped layers are currently not supported
	[  +8.385032] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [31429de3a34808dc89b2aaa1a0123fb106f136c0204edbc8adb9a317a999841e] <==
	{"level":"warn","ts":"2025-11-19T03:00:05.082190Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:00:05.114594Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:00:05.165712Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:00:05.210649Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:00:05.275845Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:00:05.309871Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:00:05.377799Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:00:05.429654Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:00:05.455822Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:00:05.503534Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:00:05.560344Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:00:05.615060Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:00:05.652335Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:00:05.776329Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:00:05.790237Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:00:05.838094Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:00:05.866235Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:00:05.893739Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:00:05.910072Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:00:05.941909Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:00:05.990196Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:00:06.050082Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:00:06.089740Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:00:06.110483Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:00:06.176000Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38518","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 03:01:11 up 10:43,  0 user,  load average: 2.45, 2.96, 2.55
	Linux default-k8s-diff-port-579203 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [89f87dfc74abc9f3ff1dac51f10ab91473e4e1cab50fcdeb4f2397936c6f95ff] <==
	I1119 03:00:16.341293       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 03:00:16.341545       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1119 03:00:16.342561       1 main.go:148] setting mtu 1500 for CNI 
	I1119 03:00:16.342576       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 03:00:16.425701       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T03:00:16Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 03:00:16.642475       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 03:00:16.642508       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 03:00:16.642518       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 03:00:16.646907       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1119 03:00:46.643089       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1119 03:00:46.643089       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1119 03:00:46.643195       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1119 03:00:46.644429       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1119 03:00:48.042711       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 03:00:48.042743       1 metrics.go:72] Registering metrics
	I1119 03:00:48.042819       1 controller.go:711] "Syncing nftables rules"
	I1119 03:00:56.645686       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1119 03:00:56.645830       1 main.go:301] handling current node
	I1119 03:01:06.639617       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1119 03:01:06.639649       1 main.go:301] handling current node
	
	
	==> kube-apiserver [8cce891df76a58f86064cf949fa194352a43bdc85165be683da0366684af7bfe] <==
	I1119 03:00:07.950693       1 controller.go:667] quota admission added evaluator for: namespaces
	E1119 03:00:07.971820       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1119 03:00:07.991195       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 03:00:08.021649       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1119 03:00:08.052590       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 03:00:08.055037       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1119 03:00:08.071907       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 03:00:08.256434       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1119 03:00:08.271387       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1119 03:00:08.271481       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 03:00:09.365074       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 03:00:09.441911       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 03:00:09.586993       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	I1119 03:00:09.591053       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	W1119 03:00:09.609828       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1119 03:00:09.611641       1 controller.go:667] quota admission added evaluator for: endpoints
	I1119 03:00:09.630568       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1119 03:00:10.638521       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1119 03:00:10.666963       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1119 03:00:10.688843       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1119 03:00:15.546545       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1119 03:00:15.666014       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1119 03:00:15.797069       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 03:00:15.842471       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1119 03:01:09.766435       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8444->192.168.85.1:40880: use of closed network connection
	
	
	==> kube-controller-manager [8e6260991e9f1128205f45517439ad4289378c86cd2db8cd2af5354c2e09e55e] <==
	I1119 03:00:14.582930       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1119 03:00:14.586104       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1119 03:00:14.586155       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1119 03:00:14.586241       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1119 03:00:14.593693       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 03:00:14.593722       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1119 03:00:14.603741       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1119 03:00:14.603800       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1119 03:00:14.609994       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1119 03:00:14.615697       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1119 03:00:14.624640       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1119 03:00:14.624720       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1119 03:00:14.624781       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1119 03:00:14.625102       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-579203"
	I1119 03:00:14.625377       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1119 03:00:14.627409       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1119 03:00:14.639380       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1119 03:00:14.639574       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1119 03:00:14.642728       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1119 03:00:14.678562       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1119 03:00:14.717704       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 03:00:14.774392       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 03:00:14.774417       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1119 03:00:14.774426       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1119 03:00:59.632749       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [fbe4d87886add596af805a9ba37eb12c7360f220d585f5b4ed4b214499e068b3] <==
	I1119 03:00:16.413899       1 server_linux.go:53] "Using iptables proxy"
	I1119 03:00:16.672626       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 03:00:16.776655       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 03:00:16.776698       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1119 03:00:16.776778       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 03:00:16.865764       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 03:00:16.865895       1 server_linux.go:132] "Using iptables Proxier"
	I1119 03:00:16.886747       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 03:00:16.887137       1 server.go:527] "Version info" version="v1.34.1"
	I1119 03:00:16.887332       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 03:00:16.888625       1 config.go:200] "Starting service config controller"
	I1119 03:00:16.897389       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 03:00:16.897460       1 config.go:106] "Starting endpoint slice config controller"
	I1119 03:00:16.897490       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 03:00:16.897599       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 03:00:16.897633       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 03:00:16.898371       1 config.go:309] "Starting node config controller"
	I1119 03:00:16.898423       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 03:00:16.898451       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 03:00:16.999940       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1119 03:00:17.000038       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1119 03:00:17.000063       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [4ac47072068fe5fb48e42241f7732f41af352b98dacf713f9be297ec1a97184a] <==
	I1119 03:00:07.911390       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 03:00:07.911518       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1119 03:00:07.909444       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	E1119 03:00:07.917289       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1119 03:00:07.937415       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1119 03:00:07.937602       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1119 03:00:07.945566       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1119 03:00:07.953387       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1119 03:00:07.953618       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1119 03:00:07.953779       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1119 03:00:07.953895       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1119 03:00:07.954043       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1119 03:00:07.954147       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1119 03:00:07.954438       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1119 03:00:07.954533       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1119 03:00:07.954642       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1119 03:00:07.954752       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1119 03:00:07.954866       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1119 03:00:07.955006       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1119 03:00:07.955124       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1119 03:00:07.955189       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1119 03:00:07.955239       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1119 03:00:08.888029       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1119 03:00:08.900375       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	I1119 03:00:09.513839       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 19 03:00:12 default-k8s-diff-port-579203 kubelet[1334]: I1119 03:00:12.188942    1334 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-default-k8s-diff-port-579203" podStartSLOduration=1.18892337 podStartE2EDuration="1.18892337s" podCreationTimestamp="2025-11-19 03:00:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 03:00:12.166059424 +0000 UTC m=+1.580124886" watchObservedRunningTime="2025-11-19 03:00:12.18892337 +0000 UTC m=+1.602988816"
	Nov 19 03:00:12 default-k8s-diff-port-579203 kubelet[1334]: I1119 03:00:12.236148    1334 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-default-k8s-diff-port-579203" podStartSLOduration=1.236131957 podStartE2EDuration="1.236131957s" podCreationTimestamp="2025-11-19 03:00:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 03:00:12.189950893 +0000 UTC m=+1.604016339" watchObservedRunningTime="2025-11-19 03:00:12.236131957 +0000 UTC m=+1.650197420"
	Nov 19 03:00:12 default-k8s-diff-port-579203 kubelet[1334]: I1119 03:00:12.236276    1334 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-default-k8s-diff-port-579203" podStartSLOduration=1.236268544 podStartE2EDuration="1.236268544s" podCreationTimestamp="2025-11-19 03:00:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 03:00:12.232922195 +0000 UTC m=+1.646987665" watchObservedRunningTime="2025-11-19 03:00:12.236268544 +0000 UTC m=+1.650333998"
	Nov 19 03:00:14 default-k8s-diff-port-579203 kubelet[1334]: I1119 03:00:14.608360    1334 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 19 03:00:14 default-k8s-diff-port-579203 kubelet[1334]: I1119 03:00:14.609404    1334 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 19 03:00:15 default-k8s-diff-port-579203 kubelet[1334]: I1119 03:00:15.845083    1334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2cd4821b-c2c9-4f47-b5de-93e55c8f8c38-kube-proxy\") pod \"kube-proxy-7ncfq\" (UID: \"2cd4821b-c2c9-4f47-b5de-93e55c8f8c38\") " pod="kube-system/kube-proxy-7ncfq"
	Nov 19 03:00:15 default-k8s-diff-port-579203 kubelet[1334]: I1119 03:00:15.845124    1334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5690abd0-63a3-4580-a0bf-a259dc29f6d0-lib-modules\") pod \"kindnet-bt849\" (UID: \"5690abd0-63a3-4580-a0bf-a259dc29f6d0\") " pod="kube-system/kindnet-bt849"
	Nov 19 03:00:15 default-k8s-diff-port-579203 kubelet[1334]: I1119 03:00:15.845146    1334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vv5r6\" (UniqueName: \"kubernetes.io/projected/5690abd0-63a3-4580-a0bf-a259dc29f6d0-kube-api-access-vv5r6\") pod \"kindnet-bt849\" (UID: \"5690abd0-63a3-4580-a0bf-a259dc29f6d0\") " pod="kube-system/kindnet-bt849"
	Nov 19 03:00:15 default-k8s-diff-port-579203 kubelet[1334]: I1119 03:00:15.845174    1334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5690abd0-63a3-4580-a0bf-a259dc29f6d0-xtables-lock\") pod \"kindnet-bt849\" (UID: \"5690abd0-63a3-4580-a0bf-a259dc29f6d0\") " pod="kube-system/kindnet-bt849"
	Nov 19 03:00:15 default-k8s-diff-port-579203 kubelet[1334]: I1119 03:00:15.845216    1334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2cd4821b-c2c9-4f47-b5de-93e55c8f8c38-lib-modules\") pod \"kube-proxy-7ncfq\" (UID: \"2cd4821b-c2c9-4f47-b5de-93e55c8f8c38\") " pod="kube-system/kube-proxy-7ncfq"
	Nov 19 03:00:15 default-k8s-diff-port-579203 kubelet[1334]: I1119 03:00:15.845236    1334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6mrn\" (UniqueName: \"kubernetes.io/projected/2cd4821b-c2c9-4f47-b5de-93e55c8f8c38-kube-api-access-h6mrn\") pod \"kube-proxy-7ncfq\" (UID: \"2cd4821b-c2c9-4f47-b5de-93e55c8f8c38\") " pod="kube-system/kube-proxy-7ncfq"
	Nov 19 03:00:15 default-k8s-diff-port-579203 kubelet[1334]: I1119 03:00:15.845254    1334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2cd4821b-c2c9-4f47-b5de-93e55c8f8c38-xtables-lock\") pod \"kube-proxy-7ncfq\" (UID: \"2cd4821b-c2c9-4f47-b5de-93e55c8f8c38\") " pod="kube-system/kube-proxy-7ncfq"
	Nov 19 03:00:15 default-k8s-diff-port-579203 kubelet[1334]: I1119 03:00:15.845274    1334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/5690abd0-63a3-4580-a0bf-a259dc29f6d0-cni-cfg\") pod \"kindnet-bt849\" (UID: \"5690abd0-63a3-4580-a0bf-a259dc29f6d0\") " pod="kube-system/kindnet-bt849"
	Nov 19 03:00:16 default-k8s-diff-port-579203 kubelet[1334]: I1119 03:00:16.022332    1334 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 19 03:00:16 default-k8s-diff-port-579203 kubelet[1334]: W1119 03:00:16.139526    1334 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/d6ecbc325578cb5699745b1df1421f02473ccdb3a25858c2b3fc6ef55c70faf5/crio-4f24ba80852edfa6d209e6aa426ab67e8336dfeef7eda2cea063e8016df06758 WatchSource:0}: Error finding container 4f24ba80852edfa6d209e6aa426ab67e8336dfeef7eda2cea063e8016df06758: Status 404 returned error can't find the container with id 4f24ba80852edfa6d209e6aa426ab67e8336dfeef7eda2cea063e8016df06758
	Nov 19 03:00:17 default-k8s-diff-port-579203 kubelet[1334]: I1119 03:00:17.175592    1334 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7ncfq" podStartSLOduration=2.175574822 podStartE2EDuration="2.175574822s" podCreationTimestamp="2025-11-19 03:00:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 03:00:17.175425805 +0000 UTC m=+6.589491251" watchObservedRunningTime="2025-11-19 03:00:17.175574822 +0000 UTC m=+6.589640276"
	Nov 19 03:00:17 default-k8s-diff-port-579203 kubelet[1334]: I1119 03:00:17.175706    1334 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-bt849" podStartSLOduration=2.17569934 podStartE2EDuration="2.17569934s" podCreationTimestamp="2025-11-19 03:00:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 03:00:17.122880566 +0000 UTC m=+6.536946011" watchObservedRunningTime="2025-11-19 03:00:17.17569934 +0000 UTC m=+6.589764794"
	Nov 19 03:00:56 default-k8s-diff-port-579203 kubelet[1334]: I1119 03:00:56.757289    1334 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 19 03:00:56 default-k8s-diff-port-579203 kubelet[1334]: I1119 03:00:56.837744    1334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d74743aa-7170-415b-9f00-b196bc8b9837-config-volume\") pod \"coredns-66bc5c9577-pkngt\" (UID: \"d74743aa-7170-415b-9f00-b196bc8b9837\") " pod="kube-system/coredns-66bc5c9577-pkngt"
	Nov 19 03:00:56 default-k8s-diff-port-579203 kubelet[1334]: I1119 03:00:56.837802    1334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/9639e9e0-73e8-48ed-a25a-603c687470cd-tmp\") pod \"storage-provisioner\" (UID: \"9639e9e0-73e8-48ed-a25a-603c687470cd\") " pod="kube-system/storage-provisioner"
	Nov 19 03:00:56 default-k8s-diff-port-579203 kubelet[1334]: I1119 03:00:56.837825    1334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frch5\" (UniqueName: \"kubernetes.io/projected/9639e9e0-73e8-48ed-a25a-603c687470cd-kube-api-access-frch5\") pod \"storage-provisioner\" (UID: \"9639e9e0-73e8-48ed-a25a-603c687470cd\") " pod="kube-system/storage-provisioner"
	Nov 19 03:00:56 default-k8s-diff-port-579203 kubelet[1334]: I1119 03:00:56.837851    1334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26llx\" (UniqueName: \"kubernetes.io/projected/d74743aa-7170-415b-9f00-b196bc8b9837-kube-api-access-26llx\") pod \"coredns-66bc5c9577-pkngt\" (UID: \"d74743aa-7170-415b-9f00-b196bc8b9837\") " pod="kube-system/coredns-66bc5c9577-pkngt"
	Nov 19 03:00:58 default-k8s-diff-port-579203 kubelet[1334]: I1119 03:00:58.184325    1334 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-pkngt" podStartSLOduration=43.184305865 podStartE2EDuration="43.184305865s" podCreationTimestamp="2025-11-19 03:00:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 03:00:58.183834896 +0000 UTC m=+47.597900350" watchObservedRunningTime="2025-11-19 03:00:58.184305865 +0000 UTC m=+47.598371319"
	Nov 19 03:00:58 default-k8s-diff-port-579203 kubelet[1334]: I1119 03:00:58.225007    1334 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=41.224988616 podStartE2EDuration="41.224988616s" podCreationTimestamp="2025-11-19 03:00:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 03:00:58.224821392 +0000 UTC m=+47.638886862" watchObservedRunningTime="2025-11-19 03:00:58.224988616 +0000 UTC m=+47.639054070"
	Nov 19 03:01:00 default-k8s-diff-port-579203 kubelet[1334]: I1119 03:01:00.777102    1334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvqwk\" (UniqueName: \"kubernetes.io/projected/e24610f2-fbb3-428c-b4a9-925911a13a98-kube-api-access-hvqwk\") pod \"busybox\" (UID: \"e24610f2-fbb3-428c-b4a9-925911a13a98\") " pod="default/busybox"
	
	
	==> storage-provisioner [4d37d47d8368b75d19580a9e2cc0b76d2602966b16f8957ddbfc2de5dc47a377] <==
	I1119 03:00:57.184135       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1119 03:00:57.199558       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1119 03:00:57.200391       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1119 03:00:57.203467       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:00:57.218574       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 03:00:57.218729       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1119 03:00:57.222712       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-579203_35526b91-867b-4bab-9b97-f929e6ed68ea!
	I1119 03:00:57.238181       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9af3f9a5-889b-4042-b73a-79c73b0a4e8f", APIVersion:"v1", ResourceVersion:"452", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-579203_35526b91-867b-4bab-9b97-f929e6ed68ea became leader
	W1119 03:00:57.244773       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:00:57.253749       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 03:00:57.323311       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-579203_35526b91-867b-4bab-9b97-f929e6ed68ea!
	W1119 03:00:59.257824       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:00:59.264703       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:01:01.292905       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:01:01.314718       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:01:03.318410       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:01:03.323009       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:01:05.326307       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:01:05.330845       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:01:07.334041       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:01:07.340651       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:01:09.343346       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:01:09.347987       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:01:11.352159       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:01:11.360936       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-579203 -n default-k8s-diff-port-579203
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-579203 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.61s)
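Note: the addon-enable command this test ran for the default-k8s-diff-port-579203 profile is recorded in the audit table later in this report; re-running it by hand with verbose logging is one way to get more detail than the post-mortem captures. A minimal sketch, assuming the profile is still up on this host (every flag is copied from commands recorded elsewhere in this report):
	# hypothetical manual re-run of the failing step
	out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-579203 \
	  --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
	  --registries=MetricsServer=fake.domain \
	  --alsologtostderr -v=3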

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.35s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-592123 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-592123 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (247.472242ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T03:01:17Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-592123 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
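Note: per the stderr above, the enable aborts at its check-paused step: it shells out to "sudo runc list -f json" on the node, and that command itself exits 1 because /run/runc does not exist on this crio node. A minimal sketch for inspecting that state by hand, assuming the embed-certs-592123 profile is still running; the ssh wrapping is an assumption, only the runc invocation is copied from the error above:
	# does the runc state directory the paused check reads actually exist on the node? (hypothetical check)
	out/minikube-linux-arm64 ssh -p embed-certs-592123 -- "sudo ls -ld /run/runc"
	# the exact command the check-paused step runs, per the stderr above
	out/minikube-linux-arm64 ssh -p embed-certs-592123 -- "sudo runc list -f json"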
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-592123 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-592123 describe deploy/metrics-server -n kube-system: exit status 1 (82.681959ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-592123 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
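Note: on a healthy run the assertion above would inspect the image of the metrics-server Deployment, which should carry the fake.domain registry override passed on the command line; here the deployment was never created, so kubectl reports NotFound as shown and the assertion sees an empty string. A minimal sketch of that check, assuming the deployment exists; the jsonpath expression is an illustration, not taken from the test source:
	# hypothetical check of the image the test expects the addon to use
	kubectl --context embed-certs-592123 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'
	# expected to contain: fake.domain/registry.k8s.io/echoserver:1.4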
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-592123
helpers_test.go:243: (dbg) docker inspect embed-certs-592123:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "dac66acc5df417ae4fe7f3148566f999b25ed8eb465f085a14cb838106ad0a5e",
	        "Created": "2025-11-19T02:59:47.671670147Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1652490,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T02:59:47.727152854Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:6dfeb5329cc83d126555f88f43b09eef7a09c7f546c9166b94d33747df91b6df",
	        "ResolvConfPath": "/var/lib/docker/containers/dac66acc5df417ae4fe7f3148566f999b25ed8eb465f085a14cb838106ad0a5e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dac66acc5df417ae4fe7f3148566f999b25ed8eb465f085a14cb838106ad0a5e/hostname",
	        "HostsPath": "/var/lib/docker/containers/dac66acc5df417ae4fe7f3148566f999b25ed8eb465f085a14cb838106ad0a5e/hosts",
	        "LogPath": "/var/lib/docker/containers/dac66acc5df417ae4fe7f3148566f999b25ed8eb465f085a14cb838106ad0a5e/dac66acc5df417ae4fe7f3148566f999b25ed8eb465f085a14cb838106ad0a5e-json.log",
	        "Name": "/embed-certs-592123",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "embed-certs-592123:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-592123",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "dac66acc5df417ae4fe7f3148566f999b25ed8eb465f085a14cb838106ad0a5e",
	                "LowerDir": "/var/lib/docker/overlay2/0339914a5a3675144df08f1c4c574bd9322eef4783e3f9e23b63823595a97dd7-init/diff:/var/lib/docker/overlay2/c48d08e2bd245db4e1c5c6447aff9f72126e9377265a1f1172daf5070a059e2a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0339914a5a3675144df08f1c4c574bd9322eef4783e3f9e23b63823595a97dd7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0339914a5a3675144df08f1c4c574bd9322eef4783e3f9e23b63823595a97dd7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0339914a5a3675144df08f1c4c574bd9322eef4783e3f9e23b63823595a97dd7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-592123",
	                "Source": "/var/lib/docker/volumes/embed-certs-592123/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-592123",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-592123",
	                "name.minikube.sigs.k8s.io": "embed-certs-592123",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "99037c62b153c7cfa32ecf2b3f4c66b2c6107fc58e6a42737151b32301448e99",
	            "SandboxKey": "/var/run/docker/netns/99037c62b153",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34910"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34911"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34914"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34912"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34913"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-592123": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ee:66:b1:e6:95:d0",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b71e8f31cf38cfb3f1f6842ca4b0d69a179bc8211fb70e2032bcc5a594b1fbd8",
	                    "EndpointID": "b11980bd8778d4e67b828b3ba37767b8527e321103eed9836d377c26c7285a7e",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-592123",
	                        "dac66acc5df4"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-592123 -n embed-certs-592123
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-592123 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-592123 logs -n 25: (1.148642988s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p cilium-889743 sudo crio config                                                                                                                                                                                                             │ cilium-889743                │ jenkins │ v1.37.0 │ 19 Nov 25 02:55 UTC │                     │
	│ delete  │ -p cilium-889743                                                                                                                                                                                                                              │ cilium-889743                │ jenkins │ v1.37.0 │ 19 Nov 25 02:55 UTC │ 19 Nov 25 02:55 UTC │
	│ start   │ -p force-systemd-env-335811 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-335811     │ jenkins │ v1.37.0 │ 19 Nov 25 02:55 UTC │ 19 Nov 25 02:56 UTC │
	│ start   │ -p cert-expiration-422184 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-422184       │ jenkins │ v1.37.0 │ 19 Nov 25 02:55 UTC │ 19 Nov 25 02:56 UTC │
	│ delete  │ -p force-systemd-env-335811                                                                                                                                                                                                                   │ force-systemd-env-335811     │ jenkins │ v1.37.0 │ 19 Nov 25 02:56 UTC │ 19 Nov 25 02:56 UTC │
	│ start   │ -p cert-options-702842 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-702842          │ jenkins │ v1.37.0 │ 19 Nov 25 02:56 UTC │ 19 Nov 25 02:56 UTC │
	│ ssh     │ cert-options-702842 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-702842          │ jenkins │ v1.37.0 │ 19 Nov 25 02:56 UTC │ 19 Nov 25 02:56 UTC │
	│ ssh     │ -p cert-options-702842 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-702842          │ jenkins │ v1.37.0 │ 19 Nov 25 02:56 UTC │ 19 Nov 25 02:56 UTC │
	│ delete  │ -p cert-options-702842                                                                                                                                                                                                                        │ cert-options-702842          │ jenkins │ v1.37.0 │ 19 Nov 25 02:56 UTC │ 19 Nov 25 02:56 UTC │
	│ start   │ -p old-k8s-version-525469 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-525469       │ jenkins │ v1.37.0 │ 19 Nov 25 02:56 UTC │ 19 Nov 25 02:57 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-525469 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-525469       │ jenkins │ v1.37.0 │ 19 Nov 25 02:58 UTC │                     │
	│ stop    │ -p old-k8s-version-525469 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-525469       │ jenkins │ v1.37.0 │ 19 Nov 25 02:58 UTC │ 19 Nov 25 02:58 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-525469 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-525469       │ jenkins │ v1.37.0 │ 19 Nov 25 02:58 UTC │ 19 Nov 25 02:58 UTC │
	│ start   │ -p old-k8s-version-525469 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-525469       │ jenkins │ v1.37.0 │ 19 Nov 25 02:58 UTC │ 19 Nov 25 02:59 UTC │
	│ image   │ old-k8s-version-525469 image list --format=json                                                                                                                                                                                               │ old-k8s-version-525469       │ jenkins │ v1.37.0 │ 19 Nov 25 02:59 UTC │ 19 Nov 25 02:59 UTC │
	│ pause   │ -p old-k8s-version-525469 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-525469       │ jenkins │ v1.37.0 │ 19 Nov 25 02:59 UTC │                     │
	│ start   │ -p cert-expiration-422184 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-422184       │ jenkins │ v1.37.0 │ 19 Nov 25 02:59 UTC │ 19 Nov 25 02:59 UTC │
	│ delete  │ -p old-k8s-version-525469                                                                                                                                                                                                                     │ old-k8s-version-525469       │ jenkins │ v1.37.0 │ 19 Nov 25 02:59 UTC │ 19 Nov 25 02:59 UTC │
	│ delete  │ -p old-k8s-version-525469                                                                                                                                                                                                                     │ old-k8s-version-525469       │ jenkins │ v1.37.0 │ 19 Nov 25 02:59 UTC │ 19 Nov 25 02:59 UTC │
	│ start   │ -p default-k8s-diff-port-579203 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-579203 │ jenkins │ v1.37.0 │ 19 Nov 25 02:59 UTC │ 19 Nov 25 03:01 UTC │
	│ delete  │ -p cert-expiration-422184                                                                                                                                                                                                                     │ cert-expiration-422184       │ jenkins │ v1.37.0 │ 19 Nov 25 02:59 UTC │ 19 Nov 25 02:59 UTC │
	│ start   │ -p embed-certs-592123 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-592123           │ jenkins │ v1.37.0 │ 19 Nov 25 02:59 UTC │ 19 Nov 25 03:01 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-579203 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-579203 │ jenkins │ v1.37.0 │ 19 Nov 25 03:01 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-579203 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-579203 │ jenkins │ v1.37.0 │ 19 Nov 25 03:01 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-592123 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-592123           │ jenkins │ v1.37.0 │ 19 Nov 25 03:01 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 02:59:41
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 02:59:41.740900 1651562 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:59:41.741124 1651562 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:59:41.741162 1651562 out.go:374] Setting ErrFile to fd 2...
	I1119 02:59:41.741181 1651562 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:59:41.741550 1651562 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-1463525/.minikube/bin
	I1119 02:59:41.742061 1651562 out.go:368] Setting JSON to false
	I1119 02:59:41.743122 1651562 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":38509,"bootTime":1763482673,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1119 02:59:41.743222 1651562 start.go:143] virtualization:  
	I1119 02:59:41.748050 1651562 out.go:179] * [embed-certs-592123] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1119 02:59:41.751482 1651562 out.go:179]   - MINIKUBE_LOCATION=21924
	I1119 02:59:41.751651 1651562 notify.go:221] Checking for updates...
	I1119 02:59:41.757791 1651562 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 02:59:41.760880 1651562 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21924-1463525/kubeconfig
	I1119 02:59:41.764275 1651562 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-1463525/.minikube
	I1119 02:59:41.767347 1651562 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1119 02:59:41.770428 1651562 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 02:59:41.773995 1651562 config.go:182] Loaded profile config "default-k8s-diff-port-579203": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:59:41.774108 1651562 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 02:59:41.814552 1651562 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1119 02:59:41.814696 1651562 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:59:41.876041 1651562 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-19 02:59:41.866352719 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 02:59:41.876155 1651562 docker.go:319] overlay module found
	I1119 02:59:41.879324 1651562 out.go:179] * Using the docker driver based on user configuration
	I1119 02:59:41.882243 1651562 start.go:309] selected driver: docker
	I1119 02:59:41.882261 1651562 start.go:930] validating driver "docker" against <nil>
	I1119 02:59:41.882274 1651562 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 02:59:41.882990 1651562 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:59:41.974310 1651562 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:54 SystemTime:2025-11-19 02:59:41.9588122 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 02:59:41.974455 1651562 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1119 02:59:41.974695 1651562 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 02:59:41.977603 1651562 out.go:179] * Using Docker driver with root privileges
	I1119 02:59:41.981341 1651562 cni.go:84] Creating CNI manager for ""
	I1119 02:59:41.981418 1651562 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 02:59:41.981434 1651562 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1119 02:59:41.981554 1651562 start.go:353] cluster config:
	{Name:embed-certs-592123 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-592123 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPI
D:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:59:41.987135 1651562 out.go:179] * Starting "embed-certs-592123" primary control-plane node in "embed-certs-592123" cluster
	I1119 02:59:41.989994 1651562 cache.go:134] Beginning downloading kic base image for docker with crio
	I1119 02:59:41.992938 1651562 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1119 02:59:41.998463 1651562 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 02:59:41.998515 1651562 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1119 02:59:41.998526 1651562 cache.go:65] Caching tarball of preloaded images
	I1119 02:59:41.998636 1651562 preload.go:238] Found /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1119 02:59:41.998652 1651562 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 02:59:41.998767 1651562 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/embed-certs-592123/config.json ...
	I1119 02:59:41.998789 1651562 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/embed-certs-592123/config.json: {Name:mk4ec892ed5c5973512217c122e473e16e420a46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:59:41.998948 1651562 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1119 02:59:42.035122 1651562 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1119 02:59:42.035147 1651562 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1119 02:59:42.035161 1651562 cache.go:243] Successfully downloaded all kic artifacts
	I1119 02:59:42.035200 1651562 start.go:360] acquireMachinesLock for embed-certs-592123: {Name:mkad274f419d3f3256db7dae28b742586dc2ebd2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 02:59:42.035313 1651562 start.go:364] duration metric: took 94.897µs to acquireMachinesLock for "embed-certs-592123"
	I1119 02:59:42.035340 1651562 start.go:93] Provisioning new machine with config: &{Name:embed-certs-592123 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-592123 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 02:59:42.035408 1651562 start.go:125] createHost starting for "" (driver="docker")
	I1119 02:59:40.117240 1649559 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-579203
	
	I1119 02:59:40.117259 1649559 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-579203"
	I1119 02:59:40.117323 1649559 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-579203
	I1119 02:59:40.136516 1649559 main.go:143] libmachine: Using SSH client type: native
	I1119 02:59:40.136865 1649559 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34905 <nil> <nil>}
	I1119 02:59:40.136885 1649559 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-579203 && echo "default-k8s-diff-port-579203" | sudo tee /etc/hostname
	I1119 02:59:40.287384 1649559 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-579203
	
	I1119 02:59:40.287458 1649559 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-579203
	I1119 02:59:40.308771 1649559 main.go:143] libmachine: Using SSH client type: native
	I1119 02:59:40.309081 1649559 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34905 <nil> <nil>}
	I1119 02:59:40.309099 1649559 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-579203' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-579203/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-579203' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 02:59:40.450022 1649559 main.go:143] libmachine: SSH cmd err, output: <nil>: 
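	The two SSH commands above are the whole of hostname provisioning: set the kernel hostname, persist it in /etc/hostname, and keep /etc/hosts mapping 127.0.1.1 to the new name. A minimal standalone sketch of the same sequence (NAME stands in for whatever profile name is being provisioned):
	
	    # sketch only; NAME is a placeholder for the machine/profile name
	    NAME=default-k8s-diff-port-579203
	    sudo hostname "$NAME" && echo "$NAME" | sudo tee /etc/hostname
	    # rewrite an existing 127.0.1.1 entry, otherwise append one
	    if ! grep -xq ".*\s$NAME" /etc/hosts; then
	      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
	        sudo sed -i "s/^127.0.1.1\s.*/127.0.1.1 $NAME/g" /etc/hosts
	      else
	        echo "127.0.1.1 $NAME" | sudo tee -a /etc/hosts
	      fi
	    fi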
	I1119 02:59:40.450047 1649559 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21924-1463525/.minikube CaCertPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21924-1463525/.minikube}
	I1119 02:59:40.450069 1649559 ubuntu.go:190] setting up certificates
	I1119 02:59:40.450077 1649559 provision.go:84] configureAuth start
	I1119 02:59:40.450141 1649559 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-579203
	I1119 02:59:40.466967 1649559 provision.go:143] copyHostCerts
	I1119 02:59:40.467036 1649559 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-1463525/.minikube/cert.pem, removing ...
	I1119 02:59:40.467049 1649559 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-1463525/.minikube/cert.pem
	I1119 02:59:40.467126 1649559 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21924-1463525/.minikube/cert.pem (1123 bytes)
	I1119 02:59:40.467221 1649559 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-1463525/.minikube/key.pem, removing ...
	I1119 02:59:40.467231 1649559 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-1463525/.minikube/key.pem
	I1119 02:59:40.467258 1649559 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21924-1463525/.minikube/key.pem (1675 bytes)
	I1119 02:59:40.467319 1649559 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.pem, removing ...
	I1119 02:59:40.467328 1649559 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.pem
	I1119 02:59:40.467352 1649559 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.pem (1078 bytes)
	I1119 02:59:40.467406 1649559 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-579203 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-579203 localhost minikube]
	I1119 02:59:40.925159 1649559 provision.go:177] copyRemoteCerts
	I1119 02:59:40.925278 1649559 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 02:59:40.925354 1649559 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-579203
	I1119 02:59:40.948168 1649559 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34905 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/default-k8s-diff-port-579203/id_rsa Username:docker}
	I1119 02:59:41.086276 1649559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1119 02:59:41.130722 1649559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1119 02:59:41.165790 1649559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1119 02:59:41.187940 1649559 provision.go:87] duration metric: took 737.837732ms to configureAuth
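	configureAuth above generates a server certificate whose SANs cover 127.0.0.1, the container IP 192.168.85.2, the machine name, localhost and minikube, then pushes ca.pem, server.pem and server-key.pem into /etc/docker on the machine. A quick way to double-check what landed there (a sketch; the -ext flag assumes OpenSSL 1.1.1 or newer):
	
	    # inspect the SANs of the pushed server certificate
	    openssl x509 -in /etc/docker/server.pem -noout -subject -ext subjectAltName
	    # confirm it chains to the CA that was copied alongside it
	    openssl verify -CAfile /etc/docker/ca.pem /etc/docker/server.pem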
	I1119 02:59:41.187974 1649559 ubuntu.go:206] setting minikube options for container-runtime
	I1119 02:59:41.188143 1649559 config.go:182] Loaded profile config "default-k8s-diff-port-579203": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:59:41.188260 1649559 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-579203
	I1119 02:59:41.207545 1649559 main.go:143] libmachine: Using SSH client type: native
	I1119 02:59:41.207858 1649559 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34905 <nil> <nil>}
	I1119 02:59:41.207879 1649559 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 02:59:41.576495 1649559 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 02:59:41.576521 1649559 machine.go:97] duration metric: took 4.64936347s to provisionDockerMachine
	I1119 02:59:41.576532 1649559 client.go:176] duration metric: took 12.188062311s to LocalClient.Create
	I1119 02:59:41.576546 1649559 start.go:167] duration metric: took 12.188178067s to libmachine.API.Create "default-k8s-diff-port-579203"
	I1119 02:59:41.576554 1649559 start.go:293] postStartSetup for "default-k8s-diff-port-579203" (driver="docker")
	I1119 02:59:41.576565 1649559 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 02:59:41.576632 1649559 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 02:59:41.576683 1649559 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-579203
	I1119 02:59:41.601565 1649559 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34905 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/default-k8s-diff-port-579203/id_rsa Username:docker}
	I1119 02:59:41.703121 1649559 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 02:59:41.707066 1649559 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 02:59:41.707092 1649559 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 02:59:41.707104 1649559 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-1463525/.minikube/addons for local assets ...
	I1119 02:59:41.707160 1649559 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-1463525/.minikube/files for local assets ...
	I1119 02:59:41.707242 1649559 filesync.go:149] local asset: /home/jenkins/minikube-integration/21924-1463525/.minikube/files/etc/ssl/certs/14653772.pem -> 14653772.pem in /etc/ssl/certs
	I1119 02:59:41.707349 1649559 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 02:59:41.715621 1649559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/files/etc/ssl/certs/14653772.pem --> /etc/ssl/certs/14653772.pem (1708 bytes)
	I1119 02:59:41.734958 1649559 start.go:296] duration metric: took 158.389168ms for postStartSetup
	I1119 02:59:41.735324 1649559 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-579203
	I1119 02:59:41.759529 1649559 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/default-k8s-diff-port-579203/config.json ...
	I1119 02:59:41.759804 1649559 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 02:59:41.759853 1649559 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-579203
	I1119 02:59:41.784522 1649559 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34905 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/default-k8s-diff-port-579203/id_rsa Username:docker}
	I1119 02:59:41.889723 1649559 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 02:59:41.895092 1649559 start.go:128] duration metric: took 12.510269309s to createHost
	I1119 02:59:41.895114 1649559 start.go:83] releasing machines lock for "default-k8s-diff-port-579203", held for 12.510385408s
	I1119 02:59:41.895193 1649559 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-579203
	I1119 02:59:41.929812 1649559 ssh_runner.go:195] Run: cat /version.json
	I1119 02:59:41.929863 1649559 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-579203
	I1119 02:59:41.930105 1649559 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 02:59:41.930163 1649559 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-579203
	I1119 02:59:41.961605 1649559 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34905 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/default-k8s-diff-port-579203/id_rsa Username:docker}
	I1119 02:59:41.980591 1649559 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34905 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/default-k8s-diff-port-579203/id_rsa Username:docker}
	I1119 02:59:42.218148 1649559 ssh_runner.go:195] Run: systemctl --version
	I1119 02:59:42.226886 1649559 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 02:59:42.290739 1649559 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 02:59:42.302418 1649559 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 02:59:42.302502 1649559 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 02:59:42.353376 1649559 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1119 02:59:42.353417 1649559 start.go:496] detecting cgroup driver to use...
	I1119 02:59:42.353452 1649559 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1119 02:59:42.353536 1649559 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 02:59:42.389137 1649559 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 02:59:42.411043 1649559 docker.go:218] disabling cri-docker service (if available) ...
	I1119 02:59:42.411110 1649559 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 02:59:42.431153 1649559 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 02:59:42.451617 1649559 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 02:59:42.610929 1649559 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 02:59:42.795380 1649559 docker.go:234] disabling docker service ...
	I1119 02:59:42.795452 1649559 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 02:59:42.819479 1649559 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 02:59:42.845503 1649559 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 02:59:42.990136 1649559 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 02:59:43.172456 1649559 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
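	Because this profile uses the crio runtime, the provisioner first makes sure no Docker-based runtime can claim the CRI socket: it stops, disables and masks cri-docker and docker, then checks that docker is no longer active. The same sequence, condensed into a sketch that could be run by hand on the node:
	
	    # make cri-o the only container runtime on the node
	    sudo systemctl stop -f cri-docker.socket cri-docker.service
	    sudo systemctl disable cri-docker.socket
	    sudo systemctl mask cri-docker.service
	    sudo systemctl stop -f docker.socket docker.service
	    sudo systemctl disable docker.socket
	    sudo systemctl mask docker.service
	    systemctl is-active --quiet docker || echo "docker is inactive"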
	I1119 02:59:43.186026 1649559 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 02:59:43.199832 1649559 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 02:59:43.199896 1649559 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:59:43.208277 1649559 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1119 02:59:43.208339 1649559 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:59:43.216698 1649559 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:59:43.224481 1649559 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:59:43.232790 1649559 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 02:59:43.240475 1649559 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:59:43.248962 1649559 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:59:43.261409 1649559 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:59:43.269762 1649559 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 02:59:43.277282 1649559 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 02:59:43.284690 1649559 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:59:43.422190 1649559 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1119 02:59:43.817016 1649559 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 02:59:43.817138 1649559 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 02:59:43.821723 1649559 start.go:564] Will wait 60s for crictl version
	I1119 02:59:43.821838 1649559 ssh_runner.go:195] Run: which crictl
	I1119 02:59:43.826487 1649559 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 02:59:43.855888 1649559 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1119 02:59:43.856059 1649559 ssh_runner.go:195] Run: crio --version
	I1119 02:59:43.890100 1649559 ssh_runner.go:195] Run: crio --version
	I1119 02:59:43.926386 1649559 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
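	Getting CRI-O ready consists of the handful of edits just run: point crictl at the crio socket, pin the pause image, switch the cgroup manager to cgroupfs (matching the driver detected on the host), allow unprivileged low ports via default_sysctls, enable IPv4 forwarding, then restart crio and confirm the runtime answers over CRI. A condensed sketch of those steps:
	
	    # point crictl at cri-o
	    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	    # pin the pause image and the cgroup driver in the drop-in config
	    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	    sudo systemctl daemon-reload && sudo systemctl restart crio
	    sudo crictl version    # expects RuntimeName: cri-o, RuntimeVersion: 1.34.2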
	I1119 02:59:42.038952 1651562 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1119 02:59:42.039230 1651562 start.go:159] libmachine.API.Create for "embed-certs-592123" (driver="docker")
	I1119 02:59:42.039266 1651562 client.go:173] LocalClient.Create starting
	I1119 02:59:42.039326 1651562 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem
	I1119 02:59:42.039358 1651562 main.go:143] libmachine: Decoding PEM data...
	I1119 02:59:42.039376 1651562 main.go:143] libmachine: Parsing certificate...
	I1119 02:59:42.039432 1651562 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/cert.pem
	I1119 02:59:42.039452 1651562 main.go:143] libmachine: Decoding PEM data...
	I1119 02:59:42.039462 1651562 main.go:143] libmachine: Parsing certificate...
	I1119 02:59:42.039830 1651562 cli_runner.go:164] Run: docker network inspect embed-certs-592123 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1119 02:59:42.057249 1651562 cli_runner.go:211] docker network inspect embed-certs-592123 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1119 02:59:42.057330 1651562 network_create.go:284] running [docker network inspect embed-certs-592123] to gather additional debugging logs...
	I1119 02:59:42.057368 1651562 cli_runner.go:164] Run: docker network inspect embed-certs-592123
	W1119 02:59:42.079920 1651562 cli_runner.go:211] docker network inspect embed-certs-592123 returned with exit code 1
	I1119 02:59:42.079958 1651562 network_create.go:287] error running [docker network inspect embed-certs-592123]: docker network inspect embed-certs-592123: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-592123 not found
	I1119 02:59:42.079983 1651562 network_create.go:289] output of [docker network inspect embed-certs-592123]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-592123 not found
	
	** /stderr **
	I1119 02:59:42.080120 1651562 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 02:59:42.104421 1651562 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-30778cc553ec IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:62:24:59:d9:05:e6} reservation:<nil>}
	I1119 02:59:42.104846 1651562 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-564f8befa544 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:aa:bb:c9:f1:3d:0c} reservation:<nil>}
	I1119 02:59:42.105092 1651562 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-fccf9ce7bac2 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:92:9c:a6:ca:f9:d9} reservation:<nil>}
	I1119 02:59:42.105655 1651562 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400195f490}
	I1119 02:59:42.105688 1651562 network_create.go:124] attempt to create docker network embed-certs-592123 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1119 02:59:42.105753 1651562 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-592123 embed-certs-592123
	I1119 02:59:42.200247 1651562 network_create.go:108] docker network embed-certs-592123 192.168.76.0/24 created
	I1119 02:59:42.200282 1651562 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-592123" container
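	Network creation for the new profile is a plain labelled bridge network: the first three private /24s are already taken by other profiles, so 192.168.76.0/24 is picked and the node container will be given the static IP 192.168.76.2. The same network could be created and inspected by hand, a sketch using the exact flags logged above:
	
	    docker network create --driver=bridge \
	      --subnet=192.168.76.0/24 --gateway=192.168.76.1 \
	      -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
	      --label=created_by.minikube.sigs.k8s.io=true \
	      --label=name.minikube.sigs.k8s.io=embed-certs-592123 \
	      embed-certs-592123
	    docker network inspect embed-certs-592123 \
	      --format '{{(index .IPAM.Config 0).Subnet}} via {{(index .IPAM.Config 0).Gateway}}'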
	I1119 02:59:42.200392 1651562 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1119 02:59:42.240539 1651562 cli_runner.go:164] Run: docker volume create embed-certs-592123 --label name.minikube.sigs.k8s.io=embed-certs-592123 --label created_by.minikube.sigs.k8s.io=true
	I1119 02:59:42.267200 1651562 oci.go:103] Successfully created a docker volume embed-certs-592123
	I1119 02:59:42.267296 1651562 cli_runner.go:164] Run: docker run --rm --name embed-certs-592123-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-592123 --entrypoint /usr/bin/test -v embed-certs-592123:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib
	I1119 02:59:42.945845 1651562 oci.go:107] Successfully prepared a docker volume embed-certs-592123
	I1119 02:59:42.945927 1651562 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 02:59:42.945936 1651562 kic.go:194] Starting extracting preloaded images to volume ...
	I1119 02:59:42.945996 1651562 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-592123:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir
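	Rather than pulling images inside the new node, the preloaded image tarball is unpacked straight into the profile's Docker volume by a throwaway container whose entrypoint is tar. A sketch of the same trick (the kicbase digest is omitted here, and the cache path assumes a default ~/.minikube layout rather than the Jenkins workspace used in this run):
	
	    KIC_IMAGE=gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924
	    docker volume create embed-certs-592123 \
	      --label name.minikube.sigs.k8s.io=embed-certs-592123 \
	      --label created_by.minikube.sigs.k8s.io=true
	    docker run --rm --entrypoint /usr/bin/tar \
	      -v "$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro" \
	      -v embed-certs-592123:/extractDir \
	      "$KIC_IMAGE" -I lz4 -xf /preloaded.tar -C /extractDir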
	I1119 02:59:43.930772 1649559 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-579203 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 02:59:43.952088 1649559 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1119 02:59:43.955890 1649559 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 02:59:43.965152 1649559 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-579203 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-579203 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false C
ustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 02:59:43.965270 1649559 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 02:59:43.965327 1649559 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 02:59:44.007341 1649559 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 02:59:44.007366 1649559 crio.go:433] Images already preloaded, skipping extraction
	I1119 02:59:44.007429 1649559 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 02:59:44.039862 1649559 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 02:59:44.039884 1649559 cache_images.go:86] Images are preloaded, skipping loading
	I1119 02:59:44.039893 1649559 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1119 02:59:44.039977 1649559 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-579203 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-579203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
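	This unit text is what gets scp'd a few lines further down as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf: an empty ExecStart= to clear the packaged command, then an ExecStart pointing at the minikube-shipped kubelet with the node name and IP baked in. Once it is on disk, a sketch for confirming that systemd actually picked the drop-in up:
	
	    # show the merged unit and which ExecStart wins
	    sudo systemctl cat kubelet | grep -A1 '^ExecStart='
	    # list the fragment and drop-in files systemd is using
	    systemctl show kubelet -p FragmentPath -p DropInPaths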
	I1119 02:59:44.040070 1649559 ssh_runner.go:195] Run: crio config
	I1119 02:59:44.127423 1649559 cni.go:84] Creating CNI manager for ""
	I1119 02:59:44.127499 1649559 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 02:59:44.127537 1649559 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 02:59:44.127590 1649559 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-579203 NodeName:default-k8s-diff-port-579203 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 02:59:44.127774 1649559 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-579203"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1119 02:59:44.127899 1649559 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 02:59:44.137189 1649559 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 02:59:44.137356 1649559 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 02:59:44.146510 1649559 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1119 02:59:44.161653 1649559 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 02:59:44.177385 1649559 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
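	At this point the kubeadm InitConfiguration/ClusterConfiguration/KubeletConfiguration/KubeProxyConfiguration shown above is on the node as /var/tmp/minikube/kubeadm.yaml.new; it is copied to kubeadm.yaml just before init runs. If the generated config ever needs to be exercised by hand, a dry run against the bundled kubeadm is one way to do it, as a sketch (assuming the file has already been copied into place):
	
	    sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
	      --config /var/tmp/minikube/kubeadm.yaml --dry-run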
	I1119 02:59:44.192277 1649559 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1119 02:59:44.196311 1649559 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 02:59:44.206582 1649559 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:59:44.347147 1649559 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 02:59:44.366631 1649559 certs.go:69] Setting up /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/default-k8s-diff-port-579203 for IP: 192.168.85.2
	I1119 02:59:44.366709 1649559 certs.go:195] generating shared ca certs ...
	I1119 02:59:44.366741 1649559 certs.go:227] acquiring lock for ca certs: {Name:mk25124d30540ffea11da5f0aa38fb6f55186602 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:59:44.366920 1649559 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.key
	I1119 02:59:44.367012 1649559 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/proxy-client-ca.key
	I1119 02:59:44.367051 1649559 certs.go:257] generating profile certs ...
	I1119 02:59:44.367157 1649559 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/default-k8s-diff-port-579203/client.key
	I1119 02:59:44.367205 1649559 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/default-k8s-diff-port-579203/client.crt with IP's: []
	I1119 02:59:44.444784 1649559 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/default-k8s-diff-port-579203/client.crt ...
	I1119 02:59:44.444816 1649559 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/default-k8s-diff-port-579203/client.crt: {Name:mk92599fd834df9a9a71b04def0100ad1241cc93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:59:44.444985 1649559 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/default-k8s-diff-port-579203/client.key ...
	I1119 02:59:44.445009 1649559 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/default-k8s-diff-port-579203/client.key: {Name:mk6aac44a638809967825a8694552699a0c25c7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:59:44.445091 1649559 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/default-k8s-diff-port-579203/apiserver.key.1f3db3c7
	I1119 02:59:44.445110 1649559 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/default-k8s-diff-port-579203/apiserver.crt.1f3db3c7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1119 02:59:46.107868 1649559 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/default-k8s-diff-port-579203/apiserver.crt.1f3db3c7 ...
	I1119 02:59:46.107899 1649559 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/default-k8s-diff-port-579203/apiserver.crt.1f3db3c7: {Name:mkfe293518e71306c7a9d56cd9d3176e4fdd2703 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:59:46.108099 1649559 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/default-k8s-diff-port-579203/apiserver.key.1f3db3c7 ...
	I1119 02:59:46.108114 1649559 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/default-k8s-diff-port-579203/apiserver.key.1f3db3c7: {Name:mkd3825ea6d471bcaa422da590b0ccab060081a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:59:46.108206 1649559 certs.go:382] copying /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/default-k8s-diff-port-579203/apiserver.crt.1f3db3c7 -> /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/default-k8s-diff-port-579203/apiserver.crt
	I1119 02:59:46.108283 1649559 certs.go:386] copying /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/default-k8s-diff-port-579203/apiserver.key.1f3db3c7 -> /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/default-k8s-diff-port-579203/apiserver.key
	I1119 02:59:46.108350 1649559 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/default-k8s-diff-port-579203/proxy-client.key
	I1119 02:59:46.108375 1649559 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/default-k8s-diff-port-579203/proxy-client.crt with IP's: []
	I1119 02:59:46.860144 1649559 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/default-k8s-diff-port-579203/proxy-client.crt ...
	I1119 02:59:46.860182 1649559 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/default-k8s-diff-port-579203/proxy-client.crt: {Name:mkaf4a5e599eba2e347a1d222f3437cd3bcba1f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:59:46.860381 1649559 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/default-k8s-diff-port-579203/proxy-client.key ...
	I1119 02:59:46.860398 1649559 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/default-k8s-diff-port-579203/proxy-client.key: {Name:mkeb2a002b8da01b8f2d13893e78203ac4177a6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:59:46.860589 1649559 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/1465377.pem (1338 bytes)
	W1119 02:59:46.860632 1649559 certs.go:480] ignoring /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/1465377_empty.pem, impossibly tiny 0 bytes
	I1119 02:59:46.860646 1649559 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca-key.pem (1679 bytes)
	I1119 02:59:46.860674 1649559 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem (1078 bytes)
	I1119 02:59:46.860701 1649559 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/cert.pem (1123 bytes)
	I1119 02:59:46.860728 1649559 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/key.pem (1675 bytes)
	I1119 02:59:46.860774 1649559 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/files/etc/ssl/certs/14653772.pem (1708 bytes)
	I1119 02:59:46.861379 1649559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 02:59:46.880370 1649559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1119 02:59:46.898359 1649559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 02:59:46.916778 1649559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1119 02:59:46.933828 1649559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/default-k8s-diff-port-579203/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1119 02:59:46.950398 1649559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/default-k8s-diff-port-579203/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1119 02:59:46.968124 1649559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/default-k8s-diff-port-579203/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 02:59:46.986753 1649559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/default-k8s-diff-port-579203/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1119 02:59:47.006617 1649559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/files/etc/ssl/certs/14653772.pem --> /usr/share/ca-certificates/14653772.pem (1708 bytes)
	I1119 02:59:47.035869 1649559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 02:59:47.054867 1649559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/1465377.pem --> /usr/share/ca-certificates/1465377.pem (1338 bytes)
	I1119 02:59:47.073224 1649559 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 02:59:47.087831 1649559 ssh_runner.go:195] Run: openssl version
	I1119 02:59:47.094350 1649559 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14653772.pem && ln -fs /usr/share/ca-certificates/14653772.pem /etc/ssl/certs/14653772.pem"
	I1119 02:59:47.102433 1649559 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14653772.pem
	I1119 02:59:47.105979 1649559 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 02:04 /usr/share/ca-certificates/14653772.pem
	I1119 02:59:47.106039 1649559 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14653772.pem
	I1119 02:59:47.146503 1649559 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14653772.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 02:59:47.155266 1649559 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 02:59:47.163530 1649559 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:59:47.167239 1649559 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 01:58 /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:59:47.167311 1649559 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:59:47.207924 1649559 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 02:59:47.216090 1649559 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1465377.pem && ln -fs /usr/share/ca-certificates/1465377.pem /etc/ssl/certs/1465377.pem"
	I1119 02:59:47.224062 1649559 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1465377.pem
	I1119 02:59:47.228793 1649559 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 02:04 /usr/share/ca-certificates/1465377.pem
	I1119 02:59:47.228853 1649559 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1465377.pem
	I1119 02:59:47.269583 1649559 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1465377.pem /etc/ssl/certs/51391683.0"
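	The 8-hex-digit link names used above (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject-hash names: linking a CA certificate under /etc/ssl/certs as <hash>.0 is what lets OpenSSL-based clients on the node find it without rebuilding a bundle. The generic form of the pattern, as a sketch:
	
	    CERT=/usr/share/ca-certificates/minikubeCA.pem      # any CA certificate
	    HASH=$(openssl x509 -hash -noout -in "$CERT")        # e.g. b5213941
	    sudo ln -fs "$CERT" "/etc/ssl/certs/$HASH.0"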
	I1119 02:59:47.278128 1649559 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 02:59:47.281415 1649559 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1119 02:59:47.281466 1649559 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-579203 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-579203 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cust
omQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:59:47.281565 1649559 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 02:59:47.281629 1649559 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 02:59:47.307907 1649559 cri.go:89] found id: ""
	I1119 02:59:47.308023 1649559 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 02:59:47.316305 1649559 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1119 02:59:47.324446 1649559 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1119 02:59:47.324505 1649559 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1119 02:59:47.332956 1649559 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1119 02:59:47.332983 1649559 kubeadm.go:158] found existing configuration files:
	
	I1119 02:59:47.333062 1649559 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1119 02:59:47.341155 1649559 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1119 02:59:47.341257 1649559 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1119 02:59:47.348517 1649559 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1119 02:59:47.356093 1649559 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1119 02:59:47.356157 1649559 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1119 02:59:47.363400 1649559 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1119 02:59:47.370785 1649559 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1119 02:59:47.370902 1649559 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1119 02:59:47.378184 1649559 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1119 02:59:47.386252 1649559 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1119 02:59:47.386335 1649559 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1119 02:59:47.394672 1649559 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1119 02:59:47.434716 1649559 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1119 02:59:47.434800 1649559 kubeadm.go:319] [preflight] Running pre-flight checks
	I1119 02:59:47.459317 1649559 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1119 02:59:47.459455 1649559 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1119 02:59:47.459531 1649559 kubeadm.go:319] OS: Linux
	I1119 02:59:47.459605 1649559 kubeadm.go:319] CGROUPS_CPU: enabled
	I1119 02:59:47.459681 1649559 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1119 02:59:47.459761 1649559 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1119 02:59:47.459836 1649559 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1119 02:59:47.459914 1649559 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1119 02:59:47.460015 1649559 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1119 02:59:47.460097 1649559 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1119 02:59:47.460182 1649559 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1119 02:59:47.460267 1649559 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1119 02:59:47.536596 1649559 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1119 02:59:47.536738 1649559 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1119 02:59:47.536855 1649559 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1119 02:59:47.546973 1649559 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1119 02:59:47.554165 1649559 out.go:252]   - Generating certificates and keys ...
	I1119 02:59:47.554318 1649559 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1119 02:59:47.554437 1649559 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1119 02:59:48.484432 1649559 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1119 02:59:47.564641 1651562 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-592123:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir: (4.618610537s)
	I1119 02:59:47.564682 1651562 kic.go:203] duration metric: took 4.618729745s to extract preloaded images to volume ...
	W1119 02:59:47.564813 1651562 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1119 02:59:47.564914 1651562 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1119 02:59:47.652434 1651562 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-592123 --name embed-certs-592123 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-592123 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-592123 --network embed-certs-592123 --ip 192.168.76.2 --volume embed-certs-592123:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a
	I1119 02:59:48.018541 1651562 cli_runner.go:164] Run: docker container inspect embed-certs-592123 --format={{.State.Running}}
	I1119 02:59:48.049965 1651562 cli_runner.go:164] Run: docker container inspect embed-certs-592123 --format={{.State.Status}}
	I1119 02:59:48.084262 1651562 cli_runner.go:164] Run: docker exec embed-certs-592123 stat /var/lib/dpkg/alternatives/iptables
	I1119 02:59:48.161231 1651562 oci.go:144] the created container "embed-certs-592123" has a running status.
	I1119 02:59:48.161258 1651562 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21924-1463525/.minikube/machines/embed-certs-592123/id_rsa...
	I1119 02:59:48.218622 1651562 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21924-1463525/.minikube/machines/embed-certs-592123/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1119 02:59:48.247775 1651562 cli_runner.go:164] Run: docker container inspect embed-certs-592123 --format={{.State.Status}}
	I1119 02:59:48.274162 1651562 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1119 02:59:48.274196 1651562 kic_runner.go:114] Args: [docker exec --privileged embed-certs-592123 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1119 02:59:48.336036 1651562 cli_runner.go:164] Run: docker container inspect embed-certs-592123 --format={{.State.Status}}
	I1119 02:59:48.358052 1651562 machine.go:94] provisionDockerMachine start ...
	I1119 02:59:48.358154 1651562 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-592123
	I1119 02:59:48.380852 1651562 main.go:143] libmachine: Using SSH client type: native
	I1119 02:59:48.381185 1651562 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34910 <nil> <nil>}
	I1119 02:59:48.381241 1651562 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 02:59:48.382060 1651562 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1119 02:59:51.529615 1651562 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-592123
	
	I1119 02:59:51.529641 1651562 ubuntu.go:182] provisioning hostname "embed-certs-592123"
	I1119 02:59:51.529738 1651562 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-592123
	I1119 02:59:51.551525 1651562 main.go:143] libmachine: Using SSH client type: native
	I1119 02:59:51.551862 1651562 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34910 <nil> <nil>}
	I1119 02:59:51.551878 1651562 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-592123 && echo "embed-certs-592123" | sudo tee /etc/hostname
	I1119 02:59:51.723783 1651562 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-592123
	
	I1119 02:59:51.723927 1651562 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-592123
	I1119 02:59:50.758342 1649559 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1119 02:59:51.441906 1649559 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1119 02:59:51.804424 1649559 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1119 02:59:52.017978 1649559 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1119 02:59:52.018154 1649559 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-579203 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1119 02:59:52.450964 1649559 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1119 02:59:52.451137 1649559 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-579203 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1119 02:59:53.370320 1649559 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1119 02:59:51.745456 1651562 main.go:143] libmachine: Using SSH client type: native
	I1119 02:59:51.745839 1651562 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34910 <nil> <nil>}
	I1119 02:59:51.745865 1651562 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-592123' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-592123/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-592123' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 02:59:51.894530 1651562 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 02:59:51.894556 1651562 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21924-1463525/.minikube CaCertPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21924-1463525/.minikube}
	I1119 02:59:51.894585 1651562 ubuntu.go:190] setting up certificates
	I1119 02:59:51.894599 1651562 provision.go:84] configureAuth start
	I1119 02:59:51.894671 1651562 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-592123
	I1119 02:59:51.918123 1651562 provision.go:143] copyHostCerts
	I1119 02:59:51.918201 1651562 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.pem, removing ...
	I1119 02:59:51.918221 1651562 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.pem
	I1119 02:59:51.918302 1651562 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.pem (1078 bytes)
	I1119 02:59:51.918408 1651562 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-1463525/.minikube/cert.pem, removing ...
	I1119 02:59:51.918420 1651562 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-1463525/.minikube/cert.pem
	I1119 02:59:51.918458 1651562 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21924-1463525/.minikube/cert.pem (1123 bytes)
	I1119 02:59:51.918534 1651562 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-1463525/.minikube/key.pem, removing ...
	I1119 02:59:51.918544 1651562 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-1463525/.minikube/key.pem
	I1119 02:59:51.918572 1651562 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21924-1463525/.minikube/key.pem (1675 bytes)
	I1119 02:59:51.918638 1651562 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca-key.pem org=jenkins.embed-certs-592123 san=[127.0.0.1 192.168.76.2 embed-certs-592123 localhost minikube]
	I1119 02:59:52.725258 1651562 provision.go:177] copyRemoteCerts
	I1119 02:59:52.725333 1651562 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 02:59:52.725383 1651562 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-592123
	I1119 02:59:52.744424 1651562 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34910 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/embed-certs-592123/id_rsa Username:docker}
	I1119 02:59:52.854240 1651562 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1119 02:59:52.874211 1651562 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1119 02:59:52.894524 1651562 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1119 02:59:52.914060 1651562 provision.go:87] duration metric: took 1.019439868s to configureAuth
	I1119 02:59:52.914091 1651562 ubuntu.go:206] setting minikube options for container-runtime
	I1119 02:59:52.914279 1651562 config.go:182] Loaded profile config "embed-certs-592123": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:59:52.914394 1651562 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-592123
	I1119 02:59:52.935758 1651562 main.go:143] libmachine: Using SSH client type: native
	I1119 02:59:52.936099 1651562 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34910 <nil> <nil>}
	I1119 02:59:52.936121 1651562 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 02:59:53.266666 1651562 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 02:59:53.266692 1651562 machine.go:97] duration metric: took 4.908617658s to provisionDockerMachine
	I1119 02:59:53.266708 1651562 client.go:176] duration metric: took 11.227428755s to LocalClient.Create
	I1119 02:59:53.266722 1651562 start.go:167] duration metric: took 11.227493811s to libmachine.API.Create "embed-certs-592123"
	I1119 02:59:53.266729 1651562 start.go:293] postStartSetup for "embed-certs-592123" (driver="docker")
	I1119 02:59:53.266739 1651562 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 02:59:53.266813 1651562 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 02:59:53.266872 1651562 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-592123
	I1119 02:59:53.290378 1651562 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34910 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/embed-certs-592123/id_rsa Username:docker}
	I1119 02:59:53.402677 1651562 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 02:59:53.406643 1651562 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 02:59:53.406670 1651562 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 02:59:53.406680 1651562 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-1463525/.minikube/addons for local assets ...
	I1119 02:59:53.406744 1651562 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-1463525/.minikube/files for local assets ...
	I1119 02:59:53.406821 1651562 filesync.go:149] local asset: /home/jenkins/minikube-integration/21924-1463525/.minikube/files/etc/ssl/certs/14653772.pem -> 14653772.pem in /etc/ssl/certs
	I1119 02:59:53.406933 1651562 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 02:59:53.415561 1651562 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/files/etc/ssl/certs/14653772.pem --> /etc/ssl/certs/14653772.pem (1708 bytes)
	I1119 02:59:53.443785 1651562 start.go:296] duration metric: took 177.039957ms for postStartSetup
	I1119 02:59:53.444198 1651562 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-592123
	I1119 02:59:53.467148 1651562 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/embed-certs-592123/config.json ...
	I1119 02:59:53.467433 1651562 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 02:59:53.467485 1651562 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-592123
	I1119 02:59:53.483401 1651562 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34910 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/embed-certs-592123/id_rsa Username:docker}
	I1119 02:59:53.582621 1651562 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 02:59:53.587474 1651562 start.go:128] duration metric: took 11.552050892s to createHost
	I1119 02:59:53.587498 1651562 start.go:83] releasing machines lock for "embed-certs-592123", held for 11.552176436s
	I1119 02:59:53.587568 1651562 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-592123
	I1119 02:59:53.608785 1651562 ssh_runner.go:195] Run: cat /version.json
	I1119 02:59:53.608837 1651562 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-592123
	I1119 02:59:53.612617 1651562 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 02:59:53.612689 1651562 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-592123
	I1119 02:59:53.627558 1651562 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34910 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/embed-certs-592123/id_rsa Username:docker}
	I1119 02:59:53.647100 1651562 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34910 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/embed-certs-592123/id_rsa Username:docker}
	I1119 02:59:53.742009 1651562 ssh_runner.go:195] Run: systemctl --version
	I1119 02:59:53.850125 1651562 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 02:59:53.894116 1651562 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 02:59:53.898932 1651562 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 02:59:53.899001 1651562 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 02:59:53.943573 1651562 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1119 02:59:53.943598 1651562 start.go:496] detecting cgroup driver to use...
	I1119 02:59:53.943630 1651562 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1119 02:59:53.943691 1651562 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 02:59:53.971657 1651562 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 02:59:53.990258 1651562 docker.go:218] disabling cri-docker service (if available) ...
	I1119 02:59:53.990374 1651562 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 02:59:54.017248 1651562 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 02:59:54.048368 1651562 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 02:59:54.236533 1651562 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 02:59:54.425139 1651562 docker.go:234] disabling docker service ...
	I1119 02:59:54.425293 1651562 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 02:59:54.456671 1651562 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 02:59:54.474619 1651562 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 02:59:54.629891 1651562 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 02:59:54.790625 1651562 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 02:59:54.807456 1651562 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 02:59:54.830612 1651562 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 02:59:54.830719 1651562 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:59:54.844226 1651562 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1119 02:59:54.844322 1651562 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:59:54.854066 1651562 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:59:54.864548 1651562 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:59:54.879922 1651562 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 02:59:54.891340 1651562 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:59:54.900214 1651562 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:59:54.914372 1651562 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 02:59:54.923043 1651562 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 02:59:54.931275 1651562 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 02:59:54.939097 1651562 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:59:55.089684 1651562 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1119 02:59:55.281373 1651562 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 02:59:55.281496 1651562 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 02:59:55.285785 1651562 start.go:564] Will wait 60s for crictl version
	I1119 02:59:55.285902 1651562 ssh_runner.go:195] Run: which crictl
	I1119 02:59:55.289964 1651562 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 02:59:55.314003 1651562 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1119 02:59:55.314162 1651562 ssh_runner.go:195] Run: crio --version
	I1119 02:59:55.347595 1651562 ssh_runner.go:195] Run: crio --version
	I1119 02:59:55.385140 1651562 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1119 02:59:55.387878 1651562 cli_runner.go:164] Run: docker network inspect embed-certs-592123 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 02:59:55.403037 1651562 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1119 02:59:55.406910 1651562 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 02:59:55.416437 1651562 kubeadm.go:884] updating cluster {Name:embed-certs-592123 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-592123 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 02:59:55.416551 1651562 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 02:59:55.416605 1651562 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 02:59:55.449306 1651562 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 02:59:55.449332 1651562 crio.go:433] Images already preloaded, skipping extraction
	I1119 02:59:55.449384 1651562 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 02:59:55.485101 1651562 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 02:59:55.485125 1651562 cache_images.go:86] Images are preloaded, skipping loading
	I1119 02:59:55.485134 1651562 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1119 02:59:55.485224 1651562 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-592123 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-592123 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 02:59:55.485332 1651562 ssh_runner.go:195] Run: crio config
	I1119 02:59:55.550342 1651562 cni.go:84] Creating CNI manager for ""
	I1119 02:59:55.550382 1651562 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 02:59:55.550399 1651562 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 02:59:55.550421 1651562 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-592123 NodeName:embed-certs-592123 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 02:59:55.550564 1651562 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-592123"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1119 02:59:55.550649 1651562 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 02:59:55.558545 1651562 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 02:59:55.558628 1651562 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 02:59:55.565842 1651562 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1119 02:59:55.578288 1651562 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 02:59:55.590460 1651562 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1119 02:59:55.602981 1651562 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1119 02:59:55.606855 1651562 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 02:59:55.615773 1651562 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 02:59:55.762446 1651562 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 02:59:55.778903 1651562 certs.go:69] Setting up /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/embed-certs-592123 for IP: 192.168.76.2
	I1119 02:59:55.778926 1651562 certs.go:195] generating shared ca certs ...
	I1119 02:59:55.778943 1651562 certs.go:227] acquiring lock for ca certs: {Name:mk25124d30540ffea11da5f0aa38fb6f55186602 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:59:55.779073 1651562 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.key
	I1119 02:59:55.779131 1651562 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/proxy-client-ca.key
	I1119 02:59:55.779143 1651562 certs.go:257] generating profile certs ...
	I1119 02:59:55.779198 1651562 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/embed-certs-592123/client.key
	I1119 02:59:55.779214 1651562 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/embed-certs-592123/client.crt with IP's: []
	I1119 02:59:56.082578 1651562 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/embed-certs-592123/client.crt ...
	I1119 02:59:56.082612 1651562 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/embed-certs-592123/client.crt: {Name:mka0659fa46018fedd2261c7d014a8963c3aeb74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:59:56.082885 1651562 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/embed-certs-592123/client.key ...
	I1119 02:59:56.082902 1651562 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/embed-certs-592123/client.key: {Name:mkaee2d4223d2050f5c8f6cd0f214ebf899b8e7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:59:56.083055 1651562 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/embed-certs-592123/apiserver.key.9c644e00
	I1119 02:59:56.083090 1651562 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/embed-certs-592123/apiserver.crt.9c644e00 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1119 02:59:56.498350 1651562 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/embed-certs-592123/apiserver.crt.9c644e00 ...
	I1119 02:59:56.498384 1651562 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/embed-certs-592123/apiserver.crt.9c644e00: {Name:mkbe1588db19fd4b9250e65d26caa9c047847860 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:59:56.498640 1651562 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/embed-certs-592123/apiserver.key.9c644e00 ...
	I1119 02:59:56.498659 1651562 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/embed-certs-592123/apiserver.key.9c644e00: {Name:mk7f5da4191a12d32f058ab85ca1df365e79b208 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:59:56.498799 1651562 certs.go:382] copying /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/embed-certs-592123/apiserver.crt.9c644e00 -> /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/embed-certs-592123/apiserver.crt
	I1119 02:59:56.498922 1651562 certs.go:386] copying /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/embed-certs-592123/apiserver.key.9c644e00 -> /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/embed-certs-592123/apiserver.key
	I1119 02:59:56.499009 1651562 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/embed-certs-592123/proxy-client.key
	I1119 02:59:56.499044 1651562 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/embed-certs-592123/proxy-client.crt with IP's: []
	I1119 02:59:56.787218 1651562 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/embed-certs-592123/proxy-client.crt ...
	I1119 02:59:56.787251 1651562 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/embed-certs-592123/proxy-client.crt: {Name:mk0e6b936f5feee524ae96f54d40ee87bb1477d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:59:56.787505 1651562 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/embed-certs-592123/proxy-client.key ...
	I1119 02:59:56.787536 1651562 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/embed-certs-592123/proxy-client.key: {Name:mkf09d1cf393e5fa0d0545e06e358f2ba7929abd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 02:59:56.787776 1651562 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/1465377.pem (1338 bytes)
	W1119 02:59:56.787840 1651562 certs.go:480] ignoring /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/1465377_empty.pem, impossibly tiny 0 bytes
	I1119 02:59:56.787856 1651562 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca-key.pem (1679 bytes)
	I1119 02:59:56.787896 1651562 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem (1078 bytes)
	I1119 02:59:56.787941 1651562 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/cert.pem (1123 bytes)
	I1119 02:59:56.787975 1651562 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/key.pem (1675 bytes)
	I1119 02:59:56.788044 1651562 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/files/etc/ssl/certs/14653772.pem (1708 bytes)
	I1119 02:59:56.788712 1651562 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 02:59:56.808142 1651562 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1119 02:59:56.824739 1651562 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 02:59:56.842110 1651562 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1119 02:59:56.859350 1651562 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/embed-certs-592123/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1119 02:59:56.881066 1651562 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/embed-certs-592123/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1119 02:59:56.900070 1651562 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/embed-certs-592123/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 02:59:56.918691 1651562 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/embed-certs-592123/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1119 02:59:56.941952 1651562 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/1465377.pem --> /usr/share/ca-certificates/1465377.pem (1338 bytes)
	I1119 02:59:56.961173 1651562 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/files/etc/ssl/certs/14653772.pem --> /usr/share/ca-certificates/14653772.pem (1708 bytes)
	I1119 02:59:56.980408 1651562 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 02:59:56.999414 1651562 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 02:59:57.015694 1651562 ssh_runner.go:195] Run: openssl version
	I1119 02:59:57.022985 1651562 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1465377.pem && ln -fs /usr/share/ca-certificates/1465377.pem /etc/ssl/certs/1465377.pem"
	I1119 02:59:57.032129 1651562 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1465377.pem
	I1119 02:59:57.036320 1651562 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 02:04 /usr/share/ca-certificates/1465377.pem
	I1119 02:59:57.036389 1651562 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1465377.pem
	I1119 02:59:57.077920 1651562 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1465377.pem /etc/ssl/certs/51391683.0"
	I1119 02:59:57.086972 1651562 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14653772.pem && ln -fs /usr/share/ca-certificates/14653772.pem /etc/ssl/certs/14653772.pem"
	I1119 02:59:57.095706 1651562 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14653772.pem
	I1119 02:59:57.099734 1651562 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 02:04 /usr/share/ca-certificates/14653772.pem
	I1119 02:59:57.099817 1651562 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14653772.pem
	I1119 02:59:57.141323 1651562 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14653772.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 02:59:57.150481 1651562 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 02:59:57.159613 1651562 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:59:57.164117 1651562 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 01:58 /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:59:57.164211 1651562 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 02:59:57.205823 1651562 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 02:59:57.214890 1651562 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 02:59:57.219228 1651562 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1119 02:59:57.219290 1651562 kubeadm.go:401] StartCluster: {Name:embed-certs-592123 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-592123 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:59:57.219365 1651562 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 02:59:57.219445 1651562 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 02:59:57.246728 1651562 cri.go:89] found id: ""
	I1119 02:59:57.246807 1651562 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 02:59:57.257017 1651562 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1119 02:59:57.269205 1651562 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1119 02:59:57.269282 1651562 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1119 02:59:57.283023 1651562 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1119 02:59:57.283044 1651562 kubeadm.go:158] found existing configuration files:
	
	I1119 02:59:57.283114 1651562 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1119 02:59:57.293998 1651562 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1119 02:59:57.294071 1651562 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1119 02:59:57.301096 1651562 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1119 02:59:57.311769 1651562 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1119 02:59:57.311876 1651562 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1119 02:59:57.326682 1651562 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1119 02:59:57.335217 1651562 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1119 02:59:57.335296 1651562 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1119 02:59:57.343438 1651562 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1119 02:59:57.352524 1651562 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1119 02:59:57.352606 1651562 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1119 02:59:57.360645 1651562 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1119 02:59:57.418099 1651562 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1119 02:59:57.418506 1651562 kubeadm.go:319] [preflight] Running pre-flight checks
	I1119 02:59:57.490935 1651562 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1119 02:59:57.491016 1651562 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1119 02:59:57.491070 1651562 kubeadm.go:319] OS: Linux
	I1119 02:59:57.491123 1651562 kubeadm.go:319] CGROUPS_CPU: enabled
	I1119 02:59:57.491178 1651562 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1119 02:59:57.491234 1651562 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1119 02:59:57.491288 1651562 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1119 02:59:57.491343 1651562 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1119 02:59:57.491398 1651562 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1119 02:59:57.491449 1651562 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1119 02:59:57.491504 1651562 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1119 02:59:57.491555 1651562 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1119 02:59:57.601488 1651562 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1119 02:59:57.601621 1651562 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1119 02:59:57.601718 1651562 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1119 02:59:57.609979 1651562 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1119 02:59:53.965670 1649559 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1119 02:59:54.481849 1649559 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1119 02:59:54.481935 1649559 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1119 02:59:55.287094 1649559 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1119 02:59:56.158431 1649559 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1119 02:59:57.456748 1649559 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1119 02:59:57.893621 1649559 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1119 02:59:58.613846 1649559 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1119 02:59:58.614156 1649559 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1119 02:59:58.621523 1649559 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1119 02:59:58.624837 1649559 out.go:252]   - Booting up control plane ...
	I1119 02:59:58.624947 1649559 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1119 02:59:58.625029 1649559 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1119 02:59:58.625104 1649559 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1119 02:59:58.641869 1649559 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1119 02:59:58.641978 1649559 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1119 02:59:58.644180 1649559 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1119 02:59:58.644516 1649559 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1119 02:59:58.644733 1649559 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1119 02:59:58.799023 1649559 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1119 02:59:58.799148 1649559 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1119 02:59:57.615890 1651562 out.go:252]   - Generating certificates and keys ...
	I1119 02:59:57.615983 1651562 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1119 02:59:57.616056 1651562 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1119 02:59:57.931481 1651562 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1119 02:59:58.254179 1651562 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1119 02:59:58.903299 1651562 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1119 02:59:59.307143 1651562 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1119 03:00:01.600716 1651562 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1119 03:00:01.601122 1651562 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-592123 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1119 02:59:59.804890 1649559 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.003404958s
	I1119 02:59:59.811160 1649559 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1119 02:59:59.811841 1649559 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8444/livez
	I1119 02:59:59.812164 1649559 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1119 02:59:59.812798 1649559 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1119 03:00:01.746721 1651562 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1119 03:00:01.747276 1651562 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-592123 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1119 03:00:02.693843 1651562 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1119 03:00:03.414378 1651562 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1119 03:00:03.769844 1651562 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1119 03:00:03.769919 1651562 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1119 03:00:03.889882 1651562 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1119 03:00:04.967668 1651562 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1119 03:00:05.605369 1651562 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1119 03:00:06.053090 1651562 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1119 03:00:06.537904 1651562 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1119 03:00:06.538006 1651562 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1119 03:00:06.541942 1651562 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1119 03:00:06.545273 1651562 out.go:252]   - Booting up control plane ...
	I1119 03:00:06.545414 1651562 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1119 03:00:06.545496 1651562 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1119 03:00:06.545582 1651562 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1119 03:00:06.577917 1651562 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1119 03:00:06.578316 1651562 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1119 03:00:06.589135 1651562 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1119 03:00:06.589240 1651562 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1119 03:00:06.589282 1651562 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1119 03:00:05.813671 1649559 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 6.000263278s
	I1119 03:00:07.931468 1649559 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 8.117903692s
	I1119 03:00:09.814643 1649559 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 10.002069765s
	I1119 03:00:09.837974 1649559 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1119 03:00:09.853447 1649559 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1119 03:00:09.873112 1649559 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1119 03:00:09.873567 1649559 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-579203 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1119 03:00:09.894010 1649559 kubeadm.go:319] [bootstrap-token] Using token: rlqfzf.sg4zgeq25fu8bm02
	I1119 03:00:09.897151 1649559 out.go:252]   - Configuring RBAC rules ...
	I1119 03:00:09.897279 1649559 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1119 03:00:09.907391 1649559 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1119 03:00:09.919354 1649559 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1119 03:00:09.926913 1649559 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1119 03:00:09.931632 1649559 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1119 03:00:09.938940 1649559 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1119 03:00:10.223760 1649559 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1119 03:00:10.669273 1649559 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1119 03:00:11.223790 1649559 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1119 03:00:11.225300 1649559 kubeadm.go:319] 
	I1119 03:00:11.225382 1649559 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1119 03:00:11.225389 1649559 kubeadm.go:319] 
	I1119 03:00:11.225470 1649559 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1119 03:00:11.225481 1649559 kubeadm.go:319] 
	I1119 03:00:11.225530 1649559 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1119 03:00:11.225978 1649559 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1119 03:00:11.226044 1649559 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1119 03:00:11.226050 1649559 kubeadm.go:319] 
	I1119 03:00:11.226107 1649559 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1119 03:00:11.226112 1649559 kubeadm.go:319] 
	I1119 03:00:11.226161 1649559 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1119 03:00:11.226166 1649559 kubeadm.go:319] 
	I1119 03:00:11.226220 1649559 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1119 03:00:11.226298 1649559 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1119 03:00:11.226369 1649559 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1119 03:00:11.226374 1649559 kubeadm.go:319] 
	I1119 03:00:11.226710 1649559 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1119 03:00:11.226849 1649559 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1119 03:00:11.226877 1649559 kubeadm.go:319] 
	I1119 03:00:11.227187 1649559 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token rlqfzf.sg4zgeq25fu8bm02 \
	I1119 03:00:11.227300 1649559 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:abb22cc8ae8e186956cff8cc7dabd6326c697e35c4ead85bcd3b5702cdc3f73a \
	I1119 03:00:11.227518 1649559 kubeadm.go:319] 	--control-plane 
	I1119 03:00:11.227529 1649559 kubeadm.go:319] 
	I1119 03:00:11.227812 1649559 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1119 03:00:11.227822 1649559 kubeadm.go:319] 
	I1119 03:00:11.228115 1649559 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token rlqfzf.sg4zgeq25fu8bm02 \
	I1119 03:00:11.228409 1649559 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:abb22cc8ae8e186956cff8cc7dabd6326c697e35c4ead85bcd3b5702cdc3f73a 
	I1119 03:00:11.245840 1649559 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1119 03:00:11.246074 1649559 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1119 03:00:11.246191 1649559 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1119 03:00:11.246206 1649559 cni.go:84] Creating CNI manager for ""
	I1119 03:00:11.246213 1649559 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 03:00:11.249762 1649559 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1119 03:00:06.796755 1651562 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1119 03:00:06.796880 1651562 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1119 03:00:08.297890 1651562 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501407621s
	I1119 03:00:08.301773 1651562 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1119 03:00:08.302145 1651562 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1119 03:00:08.305840 1651562 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1119 03:00:08.306192 1651562 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1119 03:00:11.252637 1649559 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1119 03:00:11.262081 1649559 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1119 03:00:11.262151 1649559 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1119 03:00:11.297795 1649559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1119 03:00:11.818180 1649559 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1119 03:00:11.818402 1649559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-579203 minikube.k8s.io/updated_at=2025_11_19T03_00_11_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ebd49efe8b7d44f89fc96e21beffcd64dfee8277 minikube.k8s.io/name=default-k8s-diff-port-579203 minikube.k8s.io/primary=true
	I1119 03:00:11.818555 1649559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 03:00:12.170796 1649559 ops.go:34] apiserver oom_adj: -16
	I1119 03:00:12.170817 1649559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 03:00:12.671878 1649559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 03:00:13.171612 1649559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 03:00:13.671140 1649559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 03:00:14.171688 1649559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 03:00:14.671307 1649559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 03:00:15.171077 1649559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 03:00:15.309337 1649559 kubeadm.go:1114] duration metric: took 3.491069896s to wait for elevateKubeSystemPrivileges
	I1119 03:00:15.309363 1649559 kubeadm.go:403] duration metric: took 28.027901223s to StartCluster
	I1119 03:00:15.309437 1649559 settings.go:142] acquiring lock: {Name:mk60840cd190596747bdc8f4ec6ab9c30048bf11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 03:00:15.309576 1649559 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21924-1463525/kubeconfig
	I1119 03:00:15.310372 1649559 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/kubeconfig: {Name:mk569dbb5fa76b01404eb61ebb04e9884665ea78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 03:00:15.310715 1649559 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1119 03:00:15.310945 1649559 config.go:182] Loaded profile config "default-k8s-diff-port-579203": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 03:00:15.311042 1649559 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 03:00:15.311103 1649559 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-579203"
	I1119 03:00:15.311118 1649559 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-579203"
	I1119 03:00:15.311138 1649559 host.go:66] Checking if "default-k8s-diff-port-579203" exists ...
	I1119 03:00:15.311801 1649559 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-579203 --format={{.State.Status}}
	I1119 03:00:15.311019 1649559 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 03:00:15.312574 1649559 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-579203"
	I1119 03:00:15.312641 1649559 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-579203"
	I1119 03:00:15.312926 1649559 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-579203 --format={{.State.Status}}
	I1119 03:00:15.317624 1649559 out.go:179] * Verifying Kubernetes components...
	I1119 03:00:15.323670 1649559 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 03:00:15.346082 1649559 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 03:00:12.799369 1651562 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.493048293s
	I1119 03:00:15.116980 1651562 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 6.810339701s
	I1119 03:00:16.305274 1651562 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 8.002665792s
	I1119 03:00:16.336819 1651562 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1119 03:00:16.355358 1651562 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1119 03:00:16.379280 1651562 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1119 03:00:16.379767 1651562 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-592123 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1119 03:00:16.398748 1651562 kubeadm.go:319] [bootstrap-token] Using token: madf65.z1gbue97bfudhybf
	I1119 03:00:16.402001 1651562 out.go:252]   - Configuring RBAC rules ...
	I1119 03:00:16.402130 1651562 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1119 03:00:16.410282 1651562 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1119 03:00:16.420460 1651562 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1119 03:00:16.437604 1651562 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1119 03:00:16.445608 1651562 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1119 03:00:16.453889 1651562 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1119 03:00:16.716907 1651562 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1119 03:00:15.350546 1649559 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 03:00:15.350578 1649559 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 03:00:15.350651 1649559 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-579203
	I1119 03:00:15.365602 1649559 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-579203"
	I1119 03:00:15.365654 1649559 host.go:66] Checking if "default-k8s-diff-port-579203" exists ...
	I1119 03:00:15.366097 1649559 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-579203 --format={{.State.Status}}
	I1119 03:00:15.388394 1649559 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34905 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/default-k8s-diff-port-579203/id_rsa Username:docker}
	I1119 03:00:15.400557 1649559 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 03:00:15.400584 1649559 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 03:00:15.400644 1649559 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-579203
	I1119 03:00:15.427881 1649559 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34905 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/default-k8s-diff-port-579203/id_rsa Username:docker}
	I1119 03:00:15.787187 1649559 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 03:00:15.846908 1649559 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1119 03:00:15.880065 1649559 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 03:00:15.882055 1649559 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 03:00:17.053304 1649559 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.266033736s)
	I1119 03:00:17.053359 1649559 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.206361734s)
	I1119 03:00:17.053370 1649559 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
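	For reference, the sed pipeline a few lines above rewrites the CoreDNS ConfigMap so that host.minikube.internal resolves to the host gateway (192.168.85.1 for this profile) and query logging is enabled. Reconstructed from the sed expressions, the affected part of the Corefile ends up looking roughly like this (illustrative, not captured verbatim in the log):

	    .:53 {
	        log
	        errors
	        ...
	        hosts {
	           192.168.85.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf
	        ...
	    }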
	I1119 03:00:17.054427 1649559 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.174287951s)
	I1119 03:00:17.055056 1649559 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-579203" to be "Ready" ...
	I1119 03:00:17.055290 1649559 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.173166392s)
	I1119 03:00:17.123729 1649559 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1119 03:00:17.389219 1651562 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1119 03:00:17.712816 1651562 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1119 03:00:17.714408 1651562 kubeadm.go:319] 
	I1119 03:00:17.714483 1651562 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1119 03:00:17.714490 1651562 kubeadm.go:319] 
	I1119 03:00:17.714570 1651562 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1119 03:00:17.714575 1651562 kubeadm.go:319] 
	I1119 03:00:17.714601 1651562 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1119 03:00:17.715081 1651562 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1119 03:00:17.715141 1651562 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1119 03:00:17.715146 1651562 kubeadm.go:319] 
	I1119 03:00:17.715203 1651562 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1119 03:00:17.715208 1651562 kubeadm.go:319] 
	I1119 03:00:17.715258 1651562 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1119 03:00:17.715263 1651562 kubeadm.go:319] 
	I1119 03:00:17.715317 1651562 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1119 03:00:17.715396 1651562 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1119 03:00:17.715467 1651562 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1119 03:00:17.715472 1651562 kubeadm.go:319] 
	I1119 03:00:17.715757 1651562 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1119 03:00:17.715844 1651562 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1119 03:00:17.715848 1651562 kubeadm.go:319] 
	I1119 03:00:17.716138 1651562 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token madf65.z1gbue97bfudhybf \
	I1119 03:00:17.716252 1651562 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:abb22cc8ae8e186956cff8cc7dabd6326c697e35c4ead85bcd3b5702cdc3f73a \
	I1119 03:00:17.716445 1651562 kubeadm.go:319] 	--control-plane 
	I1119 03:00:17.716455 1651562 kubeadm.go:319] 
	I1119 03:00:17.716740 1651562 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1119 03:00:17.716750 1651562 kubeadm.go:319] 
	I1119 03:00:17.717026 1651562 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token madf65.z1gbue97bfudhybf \
	I1119 03:00:17.717318 1651562 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:abb22cc8ae8e186956cff8cc7dabd6326c697e35c4ead85bcd3b5702cdc3f73a 
	I1119 03:00:17.722376 1651562 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1119 03:00:17.722609 1651562 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1119 03:00:17.722736 1651562 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1119 03:00:17.722752 1651562 cni.go:84] Creating CNI manager for ""
	I1119 03:00:17.722760 1651562 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 03:00:17.727760 1651562 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1119 03:00:17.126627 1649559 addons.go:515] duration metric: took 1.815565892s for enable addons: enabled=[storage-provisioner default-storageclass]
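	To double-check which addons a profile ended up with after a run like this, minikube's addon listing can be queried against the same profile (illustrative command, not part of the captured log):

	    minikube -p default-k8s-diff-port-579203 addons list

	Only storage-provisioner and default-storageclass should show as enabled here, matching the enabled=[...] set logged above.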
	I1119 03:00:17.559346 1649559 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-579203" context rescaled to 1 replicas
	I1119 03:00:17.731105 1651562 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1119 03:00:17.736065 1651562 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1119 03:00:17.736083 1651562 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1119 03:00:17.751377 1651562 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1119 03:00:18.075216 1651562 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1119 03:00:18.075358 1651562 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 03:00:18.075427 1651562 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-592123 minikube.k8s.io/updated_at=2025_11_19T03_00_18_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ebd49efe8b7d44f89fc96e21beffcd64dfee8277 minikube.k8s.io/name=embed-certs-592123 minikube.k8s.io/primary=true
	I1119 03:00:18.231325 1651562 ops.go:34] apiserver oom_adj: -16
	I1119 03:00:18.231441 1651562 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 03:00:18.732279 1651562 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 03:00:19.231764 1651562 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 03:00:19.731559 1651562 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 03:00:20.232009 1651562 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 03:00:20.731554 1651562 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 03:00:21.232258 1651562 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 03:00:21.731550 1651562 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 03:00:22.231638 1651562 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 03:00:22.732352 1651562 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 03:00:22.833366 1651562 kubeadm.go:1114] duration metric: took 4.758053785s to wait for elevateKubeSystemPrivileges
	I1119 03:00:22.833391 1651562 kubeadm.go:403] duration metric: took 25.614114647s to StartCluster
	I1119 03:00:22.833408 1651562 settings.go:142] acquiring lock: {Name:mk60840cd190596747bdc8f4ec6ab9c30048bf11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 03:00:22.833467 1651562 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21924-1463525/kubeconfig
	I1119 03:00:22.834852 1651562 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/kubeconfig: {Name:mk569dbb5fa76b01404eb61ebb04e9884665ea78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 03:00:22.835086 1651562 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 03:00:22.835235 1651562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1119 03:00:22.835504 1651562 config.go:182] Loaded profile config "embed-certs-592123": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 03:00:22.835536 1651562 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 03:00:22.835598 1651562 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-592123"
	I1119 03:00:22.835612 1651562 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-592123"
	I1119 03:00:22.835632 1651562 host.go:66] Checking if "embed-certs-592123" exists ...
	I1119 03:00:22.836107 1651562 cli_runner.go:164] Run: docker container inspect embed-certs-592123 --format={{.State.Status}}
	I1119 03:00:22.836802 1651562 addons.go:70] Setting default-storageclass=true in profile "embed-certs-592123"
	I1119 03:00:22.836840 1651562 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-592123"
	I1119 03:00:22.837116 1651562 cli_runner.go:164] Run: docker container inspect embed-certs-592123 --format={{.State.Status}}
	I1119 03:00:22.838605 1651562 out.go:179] * Verifying Kubernetes components...
	I1119 03:00:22.842092 1651562 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 03:00:22.873658 1651562 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1119 03:00:19.058082 1649559 node_ready.go:57] node "default-k8s-diff-port-579203" has "Ready":"False" status (will retry)
	W1119 03:00:21.061636 1649559 node_ready.go:57] node "default-k8s-diff-port-579203" has "Ready":"False" status (will retry)
	W1119 03:00:23.558675 1649559 node_ready.go:57] node "default-k8s-diff-port-579203" has "Ready":"False" status (will retry)
	I1119 03:00:22.876582 1651562 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 03:00:22.876604 1651562 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 03:00:22.876674 1651562 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-592123
	I1119 03:00:22.884255 1651562 addons.go:239] Setting addon default-storageclass=true in "embed-certs-592123"
	I1119 03:00:22.884327 1651562 host.go:66] Checking if "embed-certs-592123" exists ...
	I1119 03:00:22.885985 1651562 cli_runner.go:164] Run: docker container inspect embed-certs-592123 --format={{.State.Status}}
	I1119 03:00:22.913811 1651562 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34910 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/embed-certs-592123/id_rsa Username:docker}
	I1119 03:00:22.933093 1651562 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 03:00:22.933122 1651562 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 03:00:22.933182 1651562 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-592123
	I1119 03:00:22.958831 1651562 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34910 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/embed-certs-592123/id_rsa Username:docker}
	I1119 03:00:23.184571 1651562 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 03:00:23.257551 1651562 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 03:00:23.270820 1651562 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 03:00:23.271076 1651562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1119 03:00:24.103295 1651562 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1119 03:00:24.106291 1651562 node_ready.go:35] waiting up to 6m0s for node "embed-certs-592123" to be "Ready" ...
	I1119 03:00:24.109606 1651562 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1119 03:00:24.112563 1651562 addons.go:515] duration metric: took 1.277008933s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1119 03:00:24.607361 1651562 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-592123" context rescaled to 1 replicas
	W1119 03:00:26.109137 1651562 node_ready.go:57] node "embed-certs-592123" has "Ready":"False" status (will retry)
	W1119 03:00:25.561925 1649559 node_ready.go:57] node "default-k8s-diff-port-579203" has "Ready":"False" status (will retry)
	W1119 03:00:28.058132 1649559 node_ready.go:57] node "default-k8s-diff-port-579203" has "Ready":"False" status (will retry)
	W1119 03:00:28.109533 1651562 node_ready.go:57] node "embed-certs-592123" has "Ready":"False" status (will retry)
	W1119 03:00:30.109686 1651562 node_ready.go:57] node "embed-certs-592123" has "Ready":"False" status (will retry)
	W1119 03:00:30.061791 1649559 node_ready.go:57] node "default-k8s-diff-port-579203" has "Ready":"False" status (will retry)
	W1119 03:00:32.557783 1649559 node_ready.go:57] node "default-k8s-diff-port-579203" has "Ready":"False" status (will retry)
	W1119 03:00:32.609789 1651562 node_ready.go:57] node "embed-certs-592123" has "Ready":"False" status (will retry)
	W1119 03:00:34.610128 1651562 node_ready.go:57] node "embed-certs-592123" has "Ready":"False" status (will retry)
	W1119 03:00:34.557881 1649559 node_ready.go:57] node "default-k8s-diff-port-579203" has "Ready":"False" status (will retry)
	W1119 03:00:36.558049 1649559 node_ready.go:57] node "default-k8s-diff-port-579203" has "Ready":"False" status (will retry)
	W1119 03:00:37.110116 1651562 node_ready.go:57] node "embed-certs-592123" has "Ready":"False" status (will retry)
	W1119 03:00:39.615132 1651562 node_ready.go:57] node "embed-certs-592123" has "Ready":"False" status (will retry)
	W1119 03:00:39.058085 1649559 node_ready.go:57] node "default-k8s-diff-port-579203" has "Ready":"False" status (will retry)
	W1119 03:00:41.058520 1649559 node_ready.go:57] node "default-k8s-diff-port-579203" has "Ready":"False" status (will retry)
	W1119 03:00:43.558701 1649559 node_ready.go:57] node "default-k8s-diff-port-579203" has "Ready":"False" status (will retry)
	W1119 03:00:42.112316 1651562 node_ready.go:57] node "embed-certs-592123" has "Ready":"False" status (will retry)
	W1119 03:00:44.610111 1651562 node_ready.go:57] node "embed-certs-592123" has "Ready":"False" status (will retry)
	W1119 03:00:46.058147 1649559 node_ready.go:57] node "default-k8s-diff-port-579203" has "Ready":"False" status (will retry)
	W1119 03:00:48.557815 1649559 node_ready.go:57] node "default-k8s-diff-port-579203" has "Ready":"False" status (will retry)
	W1119 03:00:47.109269 1651562 node_ready.go:57] node "embed-certs-592123" has "Ready":"False" status (will retry)
	W1119 03:00:49.610341 1651562 node_ready.go:57] node "embed-certs-592123" has "Ready":"False" status (will retry)
	W1119 03:00:50.558720 1649559 node_ready.go:57] node "default-k8s-diff-port-579203" has "Ready":"False" status (will retry)
	W1119 03:00:53.058419 1649559 node_ready.go:57] node "default-k8s-diff-port-579203" has "Ready":"False" status (will retry)
	W1119 03:00:52.109800 1651562 node_ready.go:57] node "embed-certs-592123" has "Ready":"False" status (will retry)
	W1119 03:00:54.609765 1651562 node_ready.go:57] node "embed-certs-592123" has "Ready":"False" status (will retry)
	W1119 03:00:55.060314 1649559 node_ready.go:57] node "default-k8s-diff-port-579203" has "Ready":"False" status (will retry)
	I1119 03:00:57.058436 1649559 node_ready.go:49] node "default-k8s-diff-port-579203" is "Ready"
	I1119 03:00:57.058467 1649559 node_ready.go:38] duration metric: took 40.003385948s for node "default-k8s-diff-port-579203" to be "Ready" ...
	I1119 03:00:57.058481 1649559 api_server.go:52] waiting for apiserver process to appear ...
	I1119 03:00:57.058546 1649559 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 03:00:57.070588 1649559 api_server.go:72] duration metric: took 41.758591784s to wait for apiserver process to appear ...
	I1119 03:00:57.070611 1649559 api_server.go:88] waiting for apiserver healthz status ...
	I1119 03:00:57.070629 1649559 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1119 03:00:57.080494 1649559 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1119 03:00:57.081925 1649559 api_server.go:141] control plane version: v1.34.1
	I1119 03:00:57.081953 1649559 api_server.go:131] duration metric: took 11.335422ms to wait for apiserver health ...
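	The healthz probe logged above can be reproduced by hand against the same endpoint; a rough equivalent is shown below, where -k skips TLS verification because the apiserver certificate is signed by the cluster's own CA rather than a public one:

	    curl -k https://192.168.85.2:8444/healthz
	    # a healthy apiserver answers HTTP 200 with the body: ok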
	I1119 03:00:57.081963 1649559 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 03:00:57.085138 1649559 system_pods.go:59] 8 kube-system pods found
	I1119 03:00:57.085182 1649559 system_pods.go:61] "coredns-66bc5c9577-pkngt" [d74743aa-7170-415b-9f00-b196bc8b9837] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 03:00:57.085190 1649559 system_pods.go:61] "etcd-default-k8s-diff-port-579203" [e826f0a7-b445-41e7-a7b6-ef191991365e] Running
	I1119 03:00:57.085197 1649559 system_pods.go:61] "kindnet-bt849" [5690abd0-63a3-4580-a0bf-a259dc29f6d0] Running
	I1119 03:00:57.085201 1649559 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-579203" [e50a666b-744d-415d-ac95-e502bf62a072] Running
	I1119 03:00:57.085207 1649559 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-579203" [28be9327-f878-4393-b4d3-dfe89f015c31] Running
	I1119 03:00:57.085213 1649559 system_pods.go:61] "kube-proxy-7ncfq" [2cd4821b-c2c9-4f47-b5de-93e55c8f8c38] Running
	I1119 03:00:57.085218 1649559 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-579203" [5b81d9f1-896a-4c4f-8c41-61b7b48d40ad] Running
	I1119 03:00:57.085224 1649559 system_pods.go:61] "storage-provisioner" [9639e9e0-73e8-48ed-a25a-603c687470cd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 03:00:57.085234 1649559 system_pods.go:74] duration metric: took 3.264448ms to wait for pod list to return data ...
	I1119 03:00:57.085247 1649559 default_sa.go:34] waiting for default service account to be created ...
	I1119 03:00:57.087946 1649559 default_sa.go:45] found service account: "default"
	I1119 03:00:57.087975 1649559 default_sa.go:55] duration metric: took 2.720103ms for default service account to be created ...
	I1119 03:00:57.087985 1649559 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 03:00:57.091006 1649559 system_pods.go:86] 8 kube-system pods found
	I1119 03:00:57.091052 1649559 system_pods.go:89] "coredns-66bc5c9577-pkngt" [d74743aa-7170-415b-9f00-b196bc8b9837] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 03:00:57.091059 1649559 system_pods.go:89] "etcd-default-k8s-diff-port-579203" [e826f0a7-b445-41e7-a7b6-ef191991365e] Running
	I1119 03:00:57.091067 1649559 system_pods.go:89] "kindnet-bt849" [5690abd0-63a3-4580-a0bf-a259dc29f6d0] Running
	I1119 03:00:57.091072 1649559 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-579203" [e50a666b-744d-415d-ac95-e502bf62a072] Running
	I1119 03:00:57.091077 1649559 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-579203" [28be9327-f878-4393-b4d3-dfe89f015c31] Running
	I1119 03:00:57.091082 1649559 system_pods.go:89] "kube-proxy-7ncfq" [2cd4821b-c2c9-4f47-b5de-93e55c8f8c38] Running
	I1119 03:00:57.091086 1649559 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-579203" [5b81d9f1-896a-4c4f-8c41-61b7b48d40ad] Running
	I1119 03:00:57.091092 1649559 system_pods.go:89] "storage-provisioner" [9639e9e0-73e8-48ed-a25a-603c687470cd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 03:00:57.091117 1649559 retry.go:31] will retry after 302.420543ms: missing components: kube-dns
	I1119 03:00:57.398309 1649559 system_pods.go:86] 8 kube-system pods found
	I1119 03:00:57.398342 1649559 system_pods.go:89] "coredns-66bc5c9577-pkngt" [d74743aa-7170-415b-9f00-b196bc8b9837] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 03:00:57.398350 1649559 system_pods.go:89] "etcd-default-k8s-diff-port-579203" [e826f0a7-b445-41e7-a7b6-ef191991365e] Running
	I1119 03:00:57.398357 1649559 system_pods.go:89] "kindnet-bt849" [5690abd0-63a3-4580-a0bf-a259dc29f6d0] Running
	I1119 03:00:57.398362 1649559 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-579203" [e50a666b-744d-415d-ac95-e502bf62a072] Running
	I1119 03:00:57.398366 1649559 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-579203" [28be9327-f878-4393-b4d3-dfe89f015c31] Running
	I1119 03:00:57.398372 1649559 system_pods.go:89] "kube-proxy-7ncfq" [2cd4821b-c2c9-4f47-b5de-93e55c8f8c38] Running
	I1119 03:00:57.398376 1649559 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-579203" [5b81d9f1-896a-4c4f-8c41-61b7b48d40ad] Running
	I1119 03:00:57.398382 1649559 system_pods.go:89] "storage-provisioner" [9639e9e0-73e8-48ed-a25a-603c687470cd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 03:00:57.398402 1649559 retry.go:31] will retry after 257.32747ms: missing components: kube-dns
	I1119 03:00:57.664889 1649559 system_pods.go:86] 8 kube-system pods found
	I1119 03:00:57.664919 1649559 system_pods.go:89] "coredns-66bc5c9577-pkngt" [d74743aa-7170-415b-9f00-b196bc8b9837] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 03:00:57.664927 1649559 system_pods.go:89] "etcd-default-k8s-diff-port-579203" [e826f0a7-b445-41e7-a7b6-ef191991365e] Running
	I1119 03:00:57.664933 1649559 system_pods.go:89] "kindnet-bt849" [5690abd0-63a3-4580-a0bf-a259dc29f6d0] Running
	I1119 03:00:57.664938 1649559 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-579203" [e50a666b-744d-415d-ac95-e502bf62a072] Running
	I1119 03:00:57.664942 1649559 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-579203" [28be9327-f878-4393-b4d3-dfe89f015c31] Running
	I1119 03:00:57.664946 1649559 system_pods.go:89] "kube-proxy-7ncfq" [2cd4821b-c2c9-4f47-b5de-93e55c8f8c38] Running
	I1119 03:00:57.664950 1649559 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-579203" [5b81d9f1-896a-4c4f-8c41-61b7b48d40ad] Running
	I1119 03:00:57.664956 1649559 system_pods.go:89] "storage-provisioner" [9639e9e0-73e8-48ed-a25a-603c687470cd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 03:00:57.664974 1649559 retry.go:31] will retry after 356.664094ms: missing components: kube-dns
	I1119 03:00:58.026523 1649559 system_pods.go:86] 8 kube-system pods found
	I1119 03:00:58.026572 1649559 system_pods.go:89] "coredns-66bc5c9577-pkngt" [d74743aa-7170-415b-9f00-b196bc8b9837] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 03:00:58.026583 1649559 system_pods.go:89] "etcd-default-k8s-diff-port-579203" [e826f0a7-b445-41e7-a7b6-ef191991365e] Running
	I1119 03:00:58.026592 1649559 system_pods.go:89] "kindnet-bt849" [5690abd0-63a3-4580-a0bf-a259dc29f6d0] Running
	I1119 03:00:58.026597 1649559 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-579203" [e50a666b-744d-415d-ac95-e502bf62a072] Running
	I1119 03:00:58.026601 1649559 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-579203" [28be9327-f878-4393-b4d3-dfe89f015c31] Running
	I1119 03:00:58.026607 1649559 system_pods.go:89] "kube-proxy-7ncfq" [2cd4821b-c2c9-4f47-b5de-93e55c8f8c38] Running
	I1119 03:00:58.026612 1649559 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-579203" [5b81d9f1-896a-4c4f-8c41-61b7b48d40ad] Running
	I1119 03:00:58.026624 1649559 system_pods.go:89] "storage-provisioner" [9639e9e0-73e8-48ed-a25a-603c687470cd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 03:00:58.026642 1649559 retry.go:31] will retry after 383.232625ms: missing components: kube-dns
	I1119 03:00:58.413261 1649559 system_pods.go:86] 8 kube-system pods found
	I1119 03:00:58.413294 1649559 system_pods.go:89] "coredns-66bc5c9577-pkngt" [d74743aa-7170-415b-9f00-b196bc8b9837] Running
	I1119 03:00:58.413301 1649559 system_pods.go:89] "etcd-default-k8s-diff-port-579203" [e826f0a7-b445-41e7-a7b6-ef191991365e] Running
	I1119 03:00:58.413306 1649559 system_pods.go:89] "kindnet-bt849" [5690abd0-63a3-4580-a0bf-a259dc29f6d0] Running
	I1119 03:00:58.413310 1649559 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-579203" [e50a666b-744d-415d-ac95-e502bf62a072] Running
	I1119 03:00:58.413314 1649559 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-579203" [28be9327-f878-4393-b4d3-dfe89f015c31] Running
	I1119 03:00:58.413319 1649559 system_pods.go:89] "kube-proxy-7ncfq" [2cd4821b-c2c9-4f47-b5de-93e55c8f8c38] Running
	I1119 03:00:58.413322 1649559 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-579203" [5b81d9f1-896a-4c4f-8c41-61b7b48d40ad] Running
	I1119 03:00:58.413327 1649559 system_pods.go:89] "storage-provisioner" [9639e9e0-73e8-48ed-a25a-603c687470cd] Running
	I1119 03:00:58.413334 1649559 system_pods.go:126] duration metric: took 1.325343399s to wait for k8s-apps to be running ...
	I1119 03:00:58.413345 1649559 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 03:00:58.413412 1649559 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 03:00:58.425892 1649559 system_svc.go:56] duration metric: took 12.537012ms WaitForService to wait for kubelet
	I1119 03:00:58.425971 1649559 kubeadm.go:587] duration metric: took 43.113978853s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 03:00:58.426003 1649559 node_conditions.go:102] verifying NodePressure condition ...
	I1119 03:00:58.428918 1649559 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1119 03:00:58.428950 1649559 node_conditions.go:123] node cpu capacity is 2
	I1119 03:00:58.428963 1649559 node_conditions.go:105] duration metric: took 2.9476ms to run NodePressure ...
	I1119 03:00:58.428993 1649559 start.go:242] waiting for startup goroutines ...
	I1119 03:00:58.429007 1649559 start.go:247] waiting for cluster config update ...
	I1119 03:00:58.429019 1649559 start.go:256] writing updated cluster config ...
	I1119 03:00:58.429331 1649559 ssh_runner.go:195] Run: rm -f paused
	I1119 03:00:58.432902 1649559 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 03:00:58.436665 1649559 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-pkngt" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:00:58.442062 1649559 pod_ready.go:94] pod "coredns-66bc5c9577-pkngt" is "Ready"
	I1119 03:00:58.442086 1649559 pod_ready.go:86] duration metric: took 5.386882ms for pod "coredns-66bc5c9577-pkngt" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:00:58.444419 1649559 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-579203" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:00:58.448806 1649559 pod_ready.go:94] pod "etcd-default-k8s-diff-port-579203" is "Ready"
	I1119 03:00:58.448834 1649559 pod_ready.go:86] duration metric: took 4.39505ms for pod "etcd-default-k8s-diff-port-579203" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:00:58.451127 1649559 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-579203" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:00:58.455594 1649559 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-579203" is "Ready"
	I1119 03:00:58.455619 1649559 pod_ready.go:86] duration metric: took 4.470084ms for pod "kube-apiserver-default-k8s-diff-port-579203" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:00:58.457927 1649559 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-579203" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:00:58.837776 1649559 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-579203" is "Ready"
	I1119 03:00:58.837805 1649559 pod_ready.go:86] duration metric: took 379.853189ms for pod "kube-controller-manager-default-k8s-diff-port-579203" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:00:59.037546 1649559 pod_ready.go:83] waiting for pod "kube-proxy-7ncfq" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:00:59.437539 1649559 pod_ready.go:94] pod "kube-proxy-7ncfq" is "Ready"
	I1119 03:00:59.437566 1649559 pod_ready.go:86] duration metric: took 399.953922ms for pod "kube-proxy-7ncfq" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:00:59.638289 1649559 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-579203" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:01:00.043762 1649559 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-579203" is "Ready"
	I1119 03:01:00.043871 1649559 pod_ready.go:86] duration metric: took 405.555944ms for pod "kube-scheduler-default-k8s-diff-port-579203" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:01:00.043902 1649559 pod_ready.go:40] duration metric: took 1.610970834s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 03:01:00.239287 1649559 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1119 03:01:00.247376 1649559 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-579203" cluster and "default" namespace by default
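	
	The pod_ready waits above poll each matching kube-system pod until its PodReady condition is True (or the pod is gone). For readers who want to reproduce roughly that check outside of minikube, here is a minimal client-go sketch; it is a hypothetical helper, not minikube's pod_ready.go, and assumes a reachable kubeconfig at the default path:
	
	// podready_sketch.go - hedged illustration of a "wait for Ready" loop,
	// not the actual minikube implementation.
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func isReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				return true
			}
		}
		return false
	}
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
	
		// The same label selectors the log reports waiting on.
		selectors := []string{
			"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
			"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
		}
		ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
		defer cancel()
		for _, sel := range selectors {
			for {
				pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: sel})
				if err != nil {
					panic(err)
				}
				ready := len(pods.Items) > 0
				for i := range pods.Items {
					ready = ready && isReady(&pods.Items[i])
				}
				if ready {
					fmt.Printf("pods matching %q are Ready\n", sel)
					break
				}
				time.Sleep(500 * time.Millisecond)
			}
		}
	}
	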
	W1119 03:00:57.109220 1651562 node_ready.go:57] node "embed-certs-592123" has "Ready":"False" status (will retry)
	W1119 03:00:59.609331 1651562 node_ready.go:57] node "embed-certs-592123" has "Ready":"False" status (will retry)
	W1119 03:01:01.609678 1651562 node_ready.go:57] node "embed-certs-592123" has "Ready":"False" status (will retry)
	W1119 03:01:04.109974 1651562 node_ready.go:57] node "embed-certs-592123" has "Ready":"False" status (will retry)
	I1119 03:01:04.609470 1651562 node_ready.go:49] node "embed-certs-592123" is "Ready"
	I1119 03:01:04.609501 1651562 node_ready.go:38] duration metric: took 40.50317859s for node "embed-certs-592123" to be "Ready" ...
	I1119 03:01:04.609544 1651562 api_server.go:52] waiting for apiserver process to appear ...
	I1119 03:01:04.609604 1651562 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 03:01:04.624373 1651562 api_server.go:72] duration metric: took 41.789257238s to wait for apiserver process to appear ...
	I1119 03:01:04.624395 1651562 api_server.go:88] waiting for apiserver healthz status ...
	I1119 03:01:04.624413 1651562 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 03:01:04.637333 1651562 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1119 03:01:04.638550 1651562 api_server.go:141] control plane version: v1.34.1
	I1119 03:01:04.638578 1651562 api_server.go:131] duration metric: took 14.176177ms to wait for apiserver health ...
	I1119 03:01:04.638587 1651562 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 03:01:04.649289 1651562 system_pods.go:59] 8 kube-system pods found
	I1119 03:01:04.649329 1651562 system_pods.go:61] "coredns-66bc5c9577-vtc44" [5e3bd982-5dec-4b41-97a5-feea8996184f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 03:01:04.649336 1651562 system_pods.go:61] "etcd-embed-certs-592123" [7a5b129c-3716-4d23-8c43-28d58936c458] Running
	I1119 03:01:04.649342 1651562 system_pods.go:61] "kindnet-sv99p" [30531f66-1993-4675-a8a7-c88fbd84c7e0] Running
	I1119 03:01:04.649348 1651562 system_pods.go:61] "kube-apiserver-embed-certs-592123" [a890bda5-d7b3-4776-9e06-d9323deea3d5] Running
	I1119 03:01:04.649353 1651562 system_pods.go:61] "kube-controller-manager-embed-certs-592123" [b5eadc5e-a4d2-45fb-ac21-8c466ec953fb] Running
	I1119 03:01:04.649359 1651562 system_pods.go:61] "kube-proxy-55pcf" [5d001372-9066-4ffc-a2f5-1f51e988cb2a] Running
	I1119 03:01:04.649364 1651562 system_pods.go:61] "kube-scheduler-embed-certs-592123" [d216d9cd-538e-4206-b0cf-37d7c5e8d4a3] Running
	I1119 03:01:04.649376 1651562 system_pods.go:61] "storage-provisioner" [34c0ebbf-6c58-4d0b-94de-dbfcf04b254d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 03:01:04.649385 1651562 system_pods.go:74] duration metric: took 10.79252ms to wait for pod list to return data ...
	I1119 03:01:04.649401 1651562 default_sa.go:34] waiting for default service account to be created ...
	I1119 03:01:04.655345 1651562 default_sa.go:45] found service account: "default"
	I1119 03:01:04.655373 1651562 default_sa.go:55] duration metric: took 5.96476ms for default service account to be created ...
	I1119 03:01:04.655383 1651562 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 03:01:04.659448 1651562 system_pods.go:86] 8 kube-system pods found
	I1119 03:01:04.659478 1651562 system_pods.go:89] "coredns-66bc5c9577-vtc44" [5e3bd982-5dec-4b41-97a5-feea8996184f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 03:01:04.659486 1651562 system_pods.go:89] "etcd-embed-certs-592123" [7a5b129c-3716-4d23-8c43-28d58936c458] Running
	I1119 03:01:04.659492 1651562 system_pods.go:89] "kindnet-sv99p" [30531f66-1993-4675-a8a7-c88fbd84c7e0] Running
	I1119 03:01:04.659496 1651562 system_pods.go:89] "kube-apiserver-embed-certs-592123" [a890bda5-d7b3-4776-9e06-d9323deea3d5] Running
	I1119 03:01:04.659501 1651562 system_pods.go:89] "kube-controller-manager-embed-certs-592123" [b5eadc5e-a4d2-45fb-ac21-8c466ec953fb] Running
	I1119 03:01:04.659505 1651562 system_pods.go:89] "kube-proxy-55pcf" [5d001372-9066-4ffc-a2f5-1f51e988cb2a] Running
	I1119 03:01:04.659509 1651562 system_pods.go:89] "kube-scheduler-embed-certs-592123" [d216d9cd-538e-4206-b0cf-37d7c5e8d4a3] Running
	I1119 03:01:04.659515 1651562 system_pods.go:89] "storage-provisioner" [34c0ebbf-6c58-4d0b-94de-dbfcf04b254d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 03:01:04.659537 1651562 retry.go:31] will retry after 250.30161ms: missing components: kube-dns
	I1119 03:01:04.914554 1651562 system_pods.go:86] 8 kube-system pods found
	I1119 03:01:04.914588 1651562 system_pods.go:89] "coredns-66bc5c9577-vtc44" [5e3bd982-5dec-4b41-97a5-feea8996184f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 03:01:04.914596 1651562 system_pods.go:89] "etcd-embed-certs-592123" [7a5b129c-3716-4d23-8c43-28d58936c458] Running
	I1119 03:01:04.914604 1651562 system_pods.go:89] "kindnet-sv99p" [30531f66-1993-4675-a8a7-c88fbd84c7e0] Running
	I1119 03:01:04.914609 1651562 system_pods.go:89] "kube-apiserver-embed-certs-592123" [a890bda5-d7b3-4776-9e06-d9323deea3d5] Running
	I1119 03:01:04.914614 1651562 system_pods.go:89] "kube-controller-manager-embed-certs-592123" [b5eadc5e-a4d2-45fb-ac21-8c466ec953fb] Running
	I1119 03:01:04.914618 1651562 system_pods.go:89] "kube-proxy-55pcf" [5d001372-9066-4ffc-a2f5-1f51e988cb2a] Running
	I1119 03:01:04.914623 1651562 system_pods.go:89] "kube-scheduler-embed-certs-592123" [d216d9cd-538e-4206-b0cf-37d7c5e8d4a3] Running
	I1119 03:01:04.914629 1651562 system_pods.go:89] "storage-provisioner" [34c0ebbf-6c58-4d0b-94de-dbfcf04b254d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 03:01:04.914648 1651562 retry.go:31] will retry after 267.466957ms: missing components: kube-dns
	I1119 03:01:05.186184 1651562 system_pods.go:86] 8 kube-system pods found
	I1119 03:01:05.186217 1651562 system_pods.go:89] "coredns-66bc5c9577-vtc44" [5e3bd982-5dec-4b41-97a5-feea8996184f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 03:01:05.186224 1651562 system_pods.go:89] "etcd-embed-certs-592123" [7a5b129c-3716-4d23-8c43-28d58936c458] Running
	I1119 03:01:05.186230 1651562 system_pods.go:89] "kindnet-sv99p" [30531f66-1993-4675-a8a7-c88fbd84c7e0] Running
	I1119 03:01:05.186235 1651562 system_pods.go:89] "kube-apiserver-embed-certs-592123" [a890bda5-d7b3-4776-9e06-d9323deea3d5] Running
	I1119 03:01:05.186239 1651562 system_pods.go:89] "kube-controller-manager-embed-certs-592123" [b5eadc5e-a4d2-45fb-ac21-8c466ec953fb] Running
	I1119 03:01:05.186243 1651562 system_pods.go:89] "kube-proxy-55pcf" [5d001372-9066-4ffc-a2f5-1f51e988cb2a] Running
	I1119 03:01:05.186247 1651562 system_pods.go:89] "kube-scheduler-embed-certs-592123" [d216d9cd-538e-4206-b0cf-37d7c5e8d4a3] Running
	I1119 03:01:05.186254 1651562 system_pods.go:89] "storage-provisioner" [34c0ebbf-6c58-4d0b-94de-dbfcf04b254d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 03:01:05.186289 1651562 retry.go:31] will retry after 303.104661ms: missing components: kube-dns
	I1119 03:01:05.493468 1651562 system_pods.go:86] 8 kube-system pods found
	I1119 03:01:05.493530 1651562 system_pods.go:89] "coredns-66bc5c9577-vtc44" [5e3bd982-5dec-4b41-97a5-feea8996184f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 03:01:05.493539 1651562 system_pods.go:89] "etcd-embed-certs-592123" [7a5b129c-3716-4d23-8c43-28d58936c458] Running
	I1119 03:01:05.493545 1651562 system_pods.go:89] "kindnet-sv99p" [30531f66-1993-4675-a8a7-c88fbd84c7e0] Running
	I1119 03:01:05.493551 1651562 system_pods.go:89] "kube-apiserver-embed-certs-592123" [a890bda5-d7b3-4776-9e06-d9323deea3d5] Running
	I1119 03:01:05.493557 1651562 system_pods.go:89] "kube-controller-manager-embed-certs-592123" [b5eadc5e-a4d2-45fb-ac21-8c466ec953fb] Running
	I1119 03:01:05.493561 1651562 system_pods.go:89] "kube-proxy-55pcf" [5d001372-9066-4ffc-a2f5-1f51e988cb2a] Running
	I1119 03:01:05.493567 1651562 system_pods.go:89] "kube-scheduler-embed-certs-592123" [d216d9cd-538e-4206-b0cf-37d7c5e8d4a3] Running
	I1119 03:01:05.493577 1651562 system_pods.go:89] "storage-provisioner" [34c0ebbf-6c58-4d0b-94de-dbfcf04b254d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 03:01:05.493595 1651562 retry.go:31] will retry after 486.063624ms: missing components: kube-dns
	I1119 03:01:05.983968 1651562 system_pods.go:86] 8 kube-system pods found
	I1119 03:01:05.984001 1651562 system_pods.go:89] "coredns-66bc5c9577-vtc44" [5e3bd982-5dec-4b41-97a5-feea8996184f] Running
	I1119 03:01:05.984008 1651562 system_pods.go:89] "etcd-embed-certs-592123" [7a5b129c-3716-4d23-8c43-28d58936c458] Running
	I1119 03:01:05.984012 1651562 system_pods.go:89] "kindnet-sv99p" [30531f66-1993-4675-a8a7-c88fbd84c7e0] Running
	I1119 03:01:05.984017 1651562 system_pods.go:89] "kube-apiserver-embed-certs-592123" [a890bda5-d7b3-4776-9e06-d9323deea3d5] Running
	I1119 03:01:05.984023 1651562 system_pods.go:89] "kube-controller-manager-embed-certs-592123" [b5eadc5e-a4d2-45fb-ac21-8c466ec953fb] Running
	I1119 03:01:05.984027 1651562 system_pods.go:89] "kube-proxy-55pcf" [5d001372-9066-4ffc-a2f5-1f51e988cb2a] Running
	I1119 03:01:05.984031 1651562 system_pods.go:89] "kube-scheduler-embed-certs-592123" [d216d9cd-538e-4206-b0cf-37d7c5e8d4a3] Running
	I1119 03:01:05.984035 1651562 system_pods.go:89] "storage-provisioner" [34c0ebbf-6c58-4d0b-94de-dbfcf04b254d] Running
	I1119 03:01:05.984044 1651562 system_pods.go:126] duration metric: took 1.328654209s to wait for k8s-apps to be running ...
	I1119 03:01:05.984055 1651562 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 03:01:05.984111 1651562 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 03:01:05.997239 1651562 system_svc.go:56] duration metric: took 13.173473ms WaitForService to wait for kubelet
	I1119 03:01:05.997320 1651562 kubeadm.go:587] duration metric: took 43.162208485s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 03:01:05.997363 1651562 node_conditions.go:102] verifying NodePressure condition ...
	I1119 03:01:06.000555 1651562 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1119 03:01:06.000590 1651562 node_conditions.go:123] node cpu capacity is 2
	I1119 03:01:06.000612 1651562 node_conditions.go:105] duration metric: took 3.230136ms to run NodePressure ...
	I1119 03:01:06.000625 1651562 start.go:242] waiting for startup goroutines ...
	I1119 03:01:06.000633 1651562 start.go:247] waiting for cluster config update ...
	I1119 03:01:06.000644 1651562 start.go:256] writing updated cluster config ...
	I1119 03:01:06.000943 1651562 ssh_runner.go:195] Run: rm -f paused
	I1119 03:01:06.007452 1651562 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 03:01:06.083960 1651562 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-vtc44" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:01:06.088812 1651562 pod_ready.go:94] pod "coredns-66bc5c9577-vtc44" is "Ready"
	I1119 03:01:06.088842 1651562 pod_ready.go:86] duration metric: took 4.858954ms for pod "coredns-66bc5c9577-vtc44" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:01:06.091253 1651562 pod_ready.go:83] waiting for pod "etcd-embed-certs-592123" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:01:06.096278 1651562 pod_ready.go:94] pod "etcd-embed-certs-592123" is "Ready"
	I1119 03:01:06.096307 1651562 pod_ready.go:86] duration metric: took 5.028106ms for pod "etcd-embed-certs-592123" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:01:06.098875 1651562 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-592123" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:01:06.103481 1651562 pod_ready.go:94] pod "kube-apiserver-embed-certs-592123" is "Ready"
	I1119 03:01:06.103508 1651562 pod_ready.go:86] duration metric: took 4.560067ms for pod "kube-apiserver-embed-certs-592123" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:01:06.106053 1651562 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-592123" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:01:06.411948 1651562 pod_ready.go:94] pod "kube-controller-manager-embed-certs-592123" is "Ready"
	I1119 03:01:06.411978 1651562 pod_ready.go:86] duration metric: took 305.893932ms for pod "kube-controller-manager-embed-certs-592123" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:01:06.612144 1651562 pod_ready.go:83] waiting for pod "kube-proxy-55pcf" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:01:07.011553 1651562 pod_ready.go:94] pod "kube-proxy-55pcf" is "Ready"
	I1119 03:01:07.011582 1651562 pod_ready.go:86] duration metric: took 399.359353ms for pod "kube-proxy-55pcf" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:01:07.211682 1651562 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-592123" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:01:07.611730 1651562 pod_ready.go:94] pod "kube-scheduler-embed-certs-592123" is "Ready"
	I1119 03:01:07.611757 1651562 pod_ready.go:86] duration metric: took 400.048918ms for pod "kube-scheduler-embed-certs-592123" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:01:07.611770 1651562 pod_ready.go:40] duration metric: took 1.604283109s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 03:01:07.675431 1651562 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1119 03:01:07.678607 1651562 out.go:179] * Done! kubectl is now configured to use "embed-certs-592123" cluster and "default" namespace by default
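	
	The embed-certs startup above runs the same health sequence: a node Ready wait, a pgrep for the kube-apiserver process, and an HTTPS probe of /healthz that returns 200 "ok". A minimal sketch of that last probe follows; it is hypothetical, hard-codes the 192.168.76.2:8443 endpoint from the log, and skips certificate verification for brevity (the real check uses the cluster CA):
	
	// healthz_probe.go - hedged sketch of the /healthz check seen in the log,
	// not minikube's api_server.go.
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Skipping TLS verification keeps the sketch short; do not do this in real checks.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.76.2:8443/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%d %s\n", resp.StatusCode, string(body)) // expect: 200 ok
	}
	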
	
	
	==> CRI-O <==
	Nov 19 03:01:04 embed-certs-592123 crio[844]: time="2025-11-19T03:01:04.686594309Z" level=info msg="Created container 10f2b5284b1d01279beac38e6573ccf1401a49cb80208c0f5fa00d8f7d1521a9: kube-system/coredns-66bc5c9577-vtc44/coredns" id=ef2e635f-4cb7-4281-b581-356dcab4e48c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 03:01:04 embed-certs-592123 crio[844]: time="2025-11-19T03:01:04.687792075Z" level=info msg="Starting container: 10f2b5284b1d01279beac38e6573ccf1401a49cb80208c0f5fa00d8f7d1521a9" id=c2e2064b-844d-4ae6-836a-3f32d4ecd3b4 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 03:01:04 embed-certs-592123 crio[844]: time="2025-11-19T03:01:04.697073008Z" level=info msg="Started container" PID=1741 containerID=10f2b5284b1d01279beac38e6573ccf1401a49cb80208c0f5fa00d8f7d1521a9 description=kube-system/coredns-66bc5c9577-vtc44/coredns id=c2e2064b-844d-4ae6-836a-3f32d4ecd3b4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7e514840273b0c3560fb051070b7aae64fd5eb267b01f6a94100b352a419ba6f
	Nov 19 03:01:08 embed-certs-592123 crio[844]: time="2025-11-19T03:01:08.195338693Z" level=info msg="Running pod sandbox: default/busybox/POD" id=98727390-2039-4510-ba15-18687260d66b name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 03:01:08 embed-certs-592123 crio[844]: time="2025-11-19T03:01:08.195421004Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 03:01:08 embed-certs-592123 crio[844]: time="2025-11-19T03:01:08.201698929Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:cd8b9688de7dfda36112c07b45b2b09ddc902b29cd4eb444aa6372e44cd42048 UID:1bb0ae41-6818-4b9f-bacc-21d0feb4f909 NetNS:/var/run/netns/a2c36862-4362-4824-80f7-dac1bf62640d Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400049f640}] Aliases:map[]}"
	Nov 19 03:01:08 embed-certs-592123 crio[844]: time="2025-11-19T03:01:08.201860385Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 19 03:01:08 embed-certs-592123 crio[844]: time="2025-11-19T03:01:08.216223896Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:cd8b9688de7dfda36112c07b45b2b09ddc902b29cd4eb444aa6372e44cd42048 UID:1bb0ae41-6818-4b9f-bacc-21d0feb4f909 NetNS:/var/run/netns/a2c36862-4362-4824-80f7-dac1bf62640d Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400049f640}] Aliases:map[]}"
	Nov 19 03:01:08 embed-certs-592123 crio[844]: time="2025-11-19T03:01:08.216386616Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 19 03:01:08 embed-certs-592123 crio[844]: time="2025-11-19T03:01:08.219477719Z" level=info msg="Ran pod sandbox cd8b9688de7dfda36112c07b45b2b09ddc902b29cd4eb444aa6372e44cd42048 with infra container: default/busybox/POD" id=98727390-2039-4510-ba15-18687260d66b name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 03:01:08 embed-certs-592123 crio[844]: time="2025-11-19T03:01:08.221123424Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=08f1cd7a-3436-420e-9b03-93d8af406d7f name=/runtime.v1.ImageService/ImageStatus
	Nov 19 03:01:08 embed-certs-592123 crio[844]: time="2025-11-19T03:01:08.221243708Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=08f1cd7a-3436-420e-9b03-93d8af406d7f name=/runtime.v1.ImageService/ImageStatus
	Nov 19 03:01:08 embed-certs-592123 crio[844]: time="2025-11-19T03:01:08.221279424Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=08f1cd7a-3436-420e-9b03-93d8af406d7f name=/runtime.v1.ImageService/ImageStatus
	Nov 19 03:01:08 embed-certs-592123 crio[844]: time="2025-11-19T03:01:08.22483764Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=67d9da17-80ad-449b-85d4-737b061c24c5 name=/runtime.v1.ImageService/PullImage
	Nov 19 03:01:08 embed-certs-592123 crio[844]: time="2025-11-19T03:01:08.226892539Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 19 03:01:10 embed-certs-592123 crio[844]: time="2025-11-19T03:01:10.428471706Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=67d9da17-80ad-449b-85d4-737b061c24c5 name=/runtime.v1.ImageService/PullImage
	Nov 19 03:01:10 embed-certs-592123 crio[844]: time="2025-11-19T03:01:10.429579006Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a3f723bc-2ae5-4546-8d30-5c7476d7dfc8 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 03:01:10 embed-certs-592123 crio[844]: time="2025-11-19T03:01:10.432476966Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=8ba10050-ebc6-4c6d-838f-c5aa75779f00 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 03:01:10 embed-certs-592123 crio[844]: time="2025-11-19T03:01:10.439266056Z" level=info msg="Creating container: default/busybox/busybox" id=50dabc04-b423-4eb6-a554-bf5fdb4e5443 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 03:01:10 embed-certs-592123 crio[844]: time="2025-11-19T03:01:10.439533199Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 03:01:10 embed-certs-592123 crio[844]: time="2025-11-19T03:01:10.449077763Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 03:01:10 embed-certs-592123 crio[844]: time="2025-11-19T03:01:10.450400138Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 03:01:10 embed-certs-592123 crio[844]: time="2025-11-19T03:01:10.466893451Z" level=info msg="Created container 8a2b70e975d990fab0d347977456c42da64bbb090ae9f31bbf03ed291d888b64: default/busybox/busybox" id=50dabc04-b423-4eb6-a554-bf5fdb4e5443 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 03:01:10 embed-certs-592123 crio[844]: time="2025-11-19T03:01:10.467995844Z" level=info msg="Starting container: 8a2b70e975d990fab0d347977456c42da64bbb090ae9f31bbf03ed291d888b64" id=1ff96000-1698-4eb1-9f52-131c345ea3fd name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 03:01:10 embed-certs-592123 crio[844]: time="2025-11-19T03:01:10.471185701Z" level=info msg="Started container" PID=1796 containerID=8a2b70e975d990fab0d347977456c42da64bbb090ae9f31bbf03ed291d888b64 description=default/busybox/busybox id=1ff96000-1698-4eb1-9f52-131c345ea3fd name=/runtime.v1.RuntimeService/StartContainer sandboxID=cd8b9688de7dfda36112c07b45b2b09ddc902b29cd4eb444aa6372e44cd42048
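	
	The CRI-O lines above are its responses to CRI calls (ImageStatus, PullImage, CreateContainer, StartContainer) issued by the kubelet. The same RuntimeService can be queried directly over the CRI socket for inspection, roughly what crictl ps does. A minimal sketch follows; it is hypothetical, assumes the default CRI-O socket path /var/run/crio/crio.sock, and would need to run on the node (e.g. via minikube ssh):
	
	// cri_list.go - hedged sketch that lists containers over the CRI API.
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)
	
	func main() {
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()
	
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
	
		rt := runtimeapi.NewRuntimeServiceClient(conn)
		resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%s  %s  %s\n", c.Id, c.Metadata.Name, c.State)
		}
	}
	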
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	8a2b70e975d99       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   7 seconds ago        Running             busybox                   0                   cd8b9688de7df       busybox                                      default
	10f2b5284b1d0       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      13 seconds ago       Running             coredns                   0                   7e514840273b0       coredns-66bc5c9577-vtc44                     kube-system
	af9d481744a8a       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      13 seconds ago       Running             storage-provisioner       0                   9a638c5113e9a       storage-provisioner                          kube-system
	58c061124026f       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      54 seconds ago       Running             kindnet-cni               0                   9b1b7fe868503       kindnet-sv99p                                kube-system
	cf6b21bc42cbf       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      54 seconds ago       Running             kube-proxy                0                   4010c06a4e225       kube-proxy-55pcf                             kube-system
	ce3564edc3019       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   552bef2add271       kube-controller-manager-embed-certs-592123   kube-system
	6119dc5ad5457       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   0a167e6ed08a5       etcd-embed-certs-592123                      kube-system
	31c6263d6e927       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   761a99f77d969       kube-scheduler-embed-certs-592123            kube-system
	4439d4f251376       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   fd04cd6b96da1       kube-apiserver-embed-certs-592123            kube-system
	
	
	==> coredns [10f2b5284b1d01279beac38e6573ccf1401a49cb80208c0f5fa00d8f7d1521a9] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34373 - 53738 "HINFO IN 5910015454141169151.1405003488614238433. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01552461s
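	
	This CoreDNS instance backs the kube-dns Service (ClusterIP 10.96.0.10, allocated in the kube-apiserver log further down). A minimal sketch that resolves an in-cluster name through it directly is shown below; it is hypothetical and assumes the ClusterIP is reachable from wherever the code runs, e.g. from a pod in the cluster or via minikube ssh:
	
	// dns_probe.go - hedged sketch querying the in-cluster CoreDNS directly.
	package main
	
	import (
		"context"
		"fmt"
		"net"
		"time"
	)
	
	func main() {
		r := &net.Resolver{
			PreferGo: true,
			Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
				d := net.Dialer{Timeout: 2 * time.Second}
				// Bypass /etc/resolv.conf and ask the kube-dns ClusterIP directly.
				return d.DialContext(ctx, network, "10.96.0.10:53")
			},
		}
		ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
		defer cancel()
		addrs, err := r.LookupHost(ctx, "kubernetes.default.svc.cluster.local")
		if err != nil {
			panic(err)
		}
		fmt.Println(addrs) // expect the apiserver ClusterIP, e.g. [10.96.0.1]
	}
	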
	
	
	==> describe nodes <==
	Name:               embed-certs-592123
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-592123
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ebd49efe8b7d44f89fc96e21beffcd64dfee8277
	                    minikube.k8s.io/name=embed-certs-592123
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T03_00_18_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 03:00:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-592123
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 03:01:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 03:01:04 +0000   Wed, 19 Nov 2025 03:00:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 03:01:04 +0000   Wed, 19 Nov 2025 03:00:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 03:01:04 +0000   Wed, 19 Nov 2025 03:00:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 03:01:04 +0000   Wed, 19 Nov 2025 03:01:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-592123
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                c8c3e11e-b7bd-48ff-908e-852c6643928c
	  Boot ID:                    b92b1939-fcd0-45dc-ac89-2d161566a71c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-vtc44                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     55s
	  kube-system                 etcd-embed-certs-592123                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         64s
	  kube-system                 kindnet-sv99p                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      56s
	  kube-system                 kube-apiserver-embed-certs-592123             250m (12%)    0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-controller-manager-embed-certs-592123    200m (10%)    0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-proxy-55pcf                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                 kube-scheduler-embed-certs-592123             100m (5%)     0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 54s                kube-proxy       
	  Normal   Starting                 70s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 70s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  70s (x8 over 70s)  kubelet          Node embed-certs-592123 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    70s (x8 over 70s)  kubelet          Node embed-certs-592123 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     70s (x8 over 70s)  kubelet          Node embed-certs-592123 status is now: NodeHasSufficientPID
	  Normal   Starting                 61s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 61s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  61s                kubelet          Node embed-certs-592123 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    61s                kubelet          Node embed-certs-592123 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     61s                kubelet          Node embed-certs-592123 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           57s                node-controller  Node embed-certs-592123 event: Registered Node embed-certs-592123 in Controller
	  Normal   NodeReady                14s                kubelet          Node embed-certs-592123 status is now: NodeReady
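	
	The Conditions block above (Ready=True as of 03:01:04) is what the node_ready waits earlier in this log poll for. A minimal client-go sketch of that check is given below; it is hypothetical, not minikube's node_ready.go, and assumes a reachable kubeconfig at the default path:
	
	// nodeready_sketch.go - hedged sketch of checking the NodeReady condition.
	package main
	
	import (
		"context"
		"fmt"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
	
		node, err := cs.CoreV1().Nodes().Get(context.Background(), "embed-certs-592123", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				fmt.Printf("node %s Ready=%s (%s)\n", node.Name, c.Status, c.Reason)
			}
		}
	}
	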
	
	
	==> dmesg <==
	[ +37.747558] overlayfs: idmapped layers are currently not supported
	[Nov19 02:37] overlayfs: idmapped layers are currently not supported
	[Nov19 02:38] overlayfs: idmapped layers are currently not supported
	[Nov19 02:39] overlayfs: idmapped layers are currently not supported
	[Nov19 02:41] overlayfs: idmapped layers are currently not supported
	[ +25.528121] overlayfs: idmapped layers are currently not supported
	[ +11.329962] overlayfs: idmapped layers are currently not supported
	[Nov19 02:42] overlayfs: idmapped layers are currently not supported
	[ +16.386117] overlayfs: idmapped layers are currently not supported
	[Nov19 02:43] overlayfs: idmapped layers are currently not supported
	[ +23.762081] overlayfs: idmapped layers are currently not supported
	[Nov19 02:45] overlayfs: idmapped layers are currently not supported
	[Nov19 02:46] overlayfs: idmapped layers are currently not supported
	[Nov19 02:48] overlayfs: idmapped layers are currently not supported
	[Nov19 02:50] overlayfs: idmapped layers are currently not supported
	[ +30.622614] overlayfs: idmapped layers are currently not supported
	[Nov19 02:53] overlayfs: idmapped layers are currently not supported
	[Nov19 02:55] overlayfs: idmapped layers are currently not supported
	[ +48.629499] overlayfs: idmapped layers are currently not supported
	[Nov19 02:56] overlayfs: idmapped layers are currently not supported
	[ +31.470515] overlayfs: idmapped layers are currently not supported
	[Nov19 02:57] overlayfs: idmapped layers are currently not supported
	[Nov19 02:58] overlayfs: idmapped layers are currently not supported
	[Nov19 03:00] overlayfs: idmapped layers are currently not supported
	[  +8.385032] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [6119dc5ad545757542ffb53d7d487d37e1a12e654f0a3c4c39a507801c88b1ad] <==
	{"level":"warn","ts":"2025-11-19T03:00:12.021938Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:00:12.086266Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:00:12.135160Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:00:12.200682Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:00:12.261161Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:00:12.317280Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:00:12.395422Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:00:12.425597Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:00:12.440010Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:00:12.470262Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:00:12.496996Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:00:12.513328Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:00:12.526412Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:00:12.559977Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:00:12.569834Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:00:12.583546Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:00:12.605271Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:00:12.625232Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:00:12.652467Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:00:12.655268Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:00:12.682237Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:00:12.735863Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:00:12.805195Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:00:12.815999Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:00:12.902562Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39714","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 03:01:18 up 10:43,  0 user,  load average: 2.33, 2.93, 2.54
	Linux embed-certs-592123 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [58c061124026fb194ec33ee90ee666acc2b37a4f4115388e8d620b1068787500] <==
	I1119 03:00:23.555098       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 03:00:23.557438       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1119 03:00:23.557586       1 main.go:148] setting mtu 1500 for CNI 
	I1119 03:00:23.557600       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 03:00:23.557611       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T03:00:23Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 03:00:23.746920       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 03:00:23.746939       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 03:00:23.746947       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 03:00:23.747059       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1119 03:00:53.747469       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1119 03:00:53.747558       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1119 03:00:53.747468       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1119 03:00:53.747698       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1119 03:00:55.247691       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 03:00:55.247730       1 metrics.go:72] Registering metrics
	I1119 03:00:55.247784       1 controller.go:711] "Syncing nftables rules"
	I1119 03:01:03.749692       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1119 03:01:03.749748       1 main.go:301] handling current node
	I1119 03:01:13.745577       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1119 03:01:13.745687       1 main.go:301] handling current node
	
	
	==> kube-apiserver [4439d4f251376040234cc726bab1dc9bd7b2210440643f564ef2f6437d0d4ebc] <==
	I1119 03:00:14.194591       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 03:00:14.198785       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	E1119 03:00:14.202782       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	E1119 03:00:14.218686       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1119 03:00:14.228917       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 03:00:14.229009       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1119 03:00:14.446287       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 03:00:14.691318       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1119 03:00:14.717339       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1119 03:00:14.717360       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 03:00:15.836866       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 03:00:15.916523       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 03:00:16.003762       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1119 03:00:16.015075       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1119 03:00:16.016817       1 controller.go:667] quota admission added evaluator for: endpoints
	I1119 03:00:16.027652       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1119 03:00:16.946712       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1119 03:00:17.339827       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1119 03:00:17.383022       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1119 03:00:17.401924       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1119 03:00:22.575399       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1119 03:00:22.727311       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 03:00:22.740126       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 03:00:22.908817       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1119 03:01:17.019728       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:59928: use of closed network connection
	
	
	==> kube-controller-manager [ce3564edc301912056228c71c61026828ed2c13d01ed88960702a311c86d5445] <==
	I1119 03:00:21.926456       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1119 03:00:21.927642       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1119 03:00:21.929903       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 03:00:21.933034       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1119 03:00:21.940282       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 03:00:21.943499       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1119 03:00:21.943562       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1119 03:00:21.943647       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1119 03:00:21.943753       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-592123"
	I1119 03:00:21.943797       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1119 03:00:21.962109       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 03:00:21.968447       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1119 03:00:21.968548       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1119 03:00:21.969646       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1119 03:00:21.970822       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1119 03:00:21.971034       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1119 03:00:21.971091       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1119 03:00:21.971038       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1119 03:00:21.971117       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1119 03:00:21.973012       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1119 03:00:21.974566       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1119 03:00:21.976743       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1119 03:00:21.981302       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1119 03:00:21.981310       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1119 03:01:06.950604       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [cf6b21bc42cbffd3b5c6dbb1bbe35c80ed4c4a40731ce857b402e05ea4042d29] <==
	I1119 03:00:23.578922       1 server_linux.go:53] "Using iptables proxy"
	I1119 03:00:23.827238       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 03:00:23.927690       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 03:00:23.927732       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1119 03:00:23.927798       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 03:00:23.948187       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 03:00:23.948393       1 server_linux.go:132] "Using iptables Proxier"
	I1119 03:00:23.952330       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 03:00:23.952757       1 server.go:527] "Version info" version="v1.34.1"
	I1119 03:00:23.952921       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 03:00:23.954859       1 config.go:200] "Starting service config controller"
	I1119 03:00:23.954924       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 03:00:23.954966       1 config.go:106] "Starting endpoint slice config controller"
	I1119 03:00:23.954991       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 03:00:23.955028       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 03:00:23.955055       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 03:00:23.955713       1 config.go:309] "Starting node config controller"
	I1119 03:00:23.958116       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 03:00:23.958172       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 03:00:24.055377       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1119 03:00:24.061014       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1119 03:00:24.061097       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [31c6263d6e927af0ec0f4120a14e4ef4ff1d73153962ebae6ab09001b77d19b8] <==
	I1119 03:00:15.088724       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 03:00:15.093819       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 03:00:15.094279       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 03:00:15.095206       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1119 03:00:15.095281       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1119 03:00:15.105936       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1119 03:00:15.106151       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1119 03:00:15.106241       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1119 03:00:15.106474       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1119 03:00:15.106583       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1119 03:00:15.106705       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1119 03:00:15.106784       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1119 03:00:15.106946       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1119 03:00:15.107085       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1119 03:00:15.107168       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1119 03:00:15.107252       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1119 03:00:15.107552       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1119 03:00:15.107708       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1119 03:00:15.108685       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1119 03:00:15.108792       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1119 03:00:15.108923       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1119 03:00:15.109339       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1119 03:00:15.109398       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1119 03:00:15.109489       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	I1119 03:00:16.194422       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 19 03:00:18 embed-certs-592123 kubelet[1303]: I1119 03:00:18.587763    1303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-embed-certs-592123" podStartSLOduration=1.587746257 podStartE2EDuration="1.587746257s" podCreationTimestamp="2025-11-19 03:00:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 03:00:18.575264611 +0000 UTC m=+1.349631618" watchObservedRunningTime="2025-11-19 03:00:18.587746257 +0000 UTC m=+1.362113264"
	Nov 19 03:00:21 embed-certs-592123 kubelet[1303]: I1119 03:00:21.992521    1303 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 19 03:00:21 embed-certs-592123 kubelet[1303]: I1119 03:00:21.993163    1303 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 19 03:00:23 embed-certs-592123 kubelet[1303]: I1119 03:00:23.071557    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5d001372-9066-4ffc-a2f5-1f51e988cb2a-xtables-lock\") pod \"kube-proxy-55pcf\" (UID: \"5d001372-9066-4ffc-a2f5-1f51e988cb2a\") " pod="kube-system/kube-proxy-55pcf"
	Nov 19 03:00:23 embed-certs-592123 kubelet[1303]: I1119 03:00:23.071603    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/30531f66-1993-4675-a8a7-c88fbd84c7e0-xtables-lock\") pod \"kindnet-sv99p\" (UID: \"30531f66-1993-4675-a8a7-c88fbd84c7e0\") " pod="kube-system/kindnet-sv99p"
	Nov 19 03:00:23 embed-certs-592123 kubelet[1303]: I1119 03:00:23.071625    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5d001372-9066-4ffc-a2f5-1f51e988cb2a-lib-modules\") pod \"kube-proxy-55pcf\" (UID: \"5d001372-9066-4ffc-a2f5-1f51e988cb2a\") " pod="kube-system/kube-proxy-55pcf"
	Nov 19 03:00:23 embed-certs-592123 kubelet[1303]: I1119 03:00:23.071640    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/30531f66-1993-4675-a8a7-c88fbd84c7e0-lib-modules\") pod \"kindnet-sv99p\" (UID: \"30531f66-1993-4675-a8a7-c88fbd84c7e0\") " pod="kube-system/kindnet-sv99p"
	Nov 19 03:00:23 embed-certs-592123 kubelet[1303]: I1119 03:00:23.071659    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fksfn\" (UniqueName: \"kubernetes.io/projected/30531f66-1993-4675-a8a7-c88fbd84c7e0-kube-api-access-fksfn\") pod \"kindnet-sv99p\" (UID: \"30531f66-1993-4675-a8a7-c88fbd84c7e0\") " pod="kube-system/kindnet-sv99p"
	Nov 19 03:00:23 embed-certs-592123 kubelet[1303]: I1119 03:00:23.071682    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5d001372-9066-4ffc-a2f5-1f51e988cb2a-kube-proxy\") pod \"kube-proxy-55pcf\" (UID: \"5d001372-9066-4ffc-a2f5-1f51e988cb2a\") " pod="kube-system/kube-proxy-55pcf"
	Nov 19 03:00:23 embed-certs-592123 kubelet[1303]: I1119 03:00:23.071703    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqqx8\" (UniqueName: \"kubernetes.io/projected/5d001372-9066-4ffc-a2f5-1f51e988cb2a-kube-api-access-qqqx8\") pod \"kube-proxy-55pcf\" (UID: \"5d001372-9066-4ffc-a2f5-1f51e988cb2a\") " pod="kube-system/kube-proxy-55pcf"
	Nov 19 03:00:23 embed-certs-592123 kubelet[1303]: I1119 03:00:23.071725    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/30531f66-1993-4675-a8a7-c88fbd84c7e0-cni-cfg\") pod \"kindnet-sv99p\" (UID: \"30531f66-1993-4675-a8a7-c88fbd84c7e0\") " pod="kube-system/kindnet-sv99p"
	Nov 19 03:00:23 embed-certs-592123 kubelet[1303]: I1119 03:00:23.240004    1303 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 19 03:00:23 embed-certs-592123 kubelet[1303]: W1119 03:00:23.350752    1303 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/dac66acc5df417ae4fe7f3148566f999b25ed8eb465f085a14cb838106ad0a5e/crio-4010c06a4e225516bd8df6d8b1478b44fb8547e753aff73b28bb4481db6a6e58 WatchSource:0}: Error finding container 4010c06a4e225516bd8df6d8b1478b44fb8547e753aff73b28bb4481db6a6e58: Status 404 returned error can't find the container with id 4010c06a4e225516bd8df6d8b1478b44fb8547e753aff73b28bb4481db6a6e58
	Nov 19 03:00:23 embed-certs-592123 kubelet[1303]: I1119 03:00:23.700593    1303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-55pcf" podStartSLOduration=1.700578908 podStartE2EDuration="1.700578908s" podCreationTimestamp="2025-11-19 03:00:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 03:00:23.700356063 +0000 UTC m=+6.474723078" watchObservedRunningTime="2025-11-19 03:00:23.700578908 +0000 UTC m=+6.474945914"
	Nov 19 03:00:23 embed-certs-592123 kubelet[1303]: I1119 03:00:23.700881    1303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-sv99p" podStartSLOduration=1.700872323 podStartE2EDuration="1.700872323s" podCreationTimestamp="2025-11-19 03:00:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 03:00:23.626485345 +0000 UTC m=+6.400852352" watchObservedRunningTime="2025-11-19 03:00:23.700872323 +0000 UTC m=+6.475239362"
	Nov 19 03:01:04 embed-certs-592123 kubelet[1303]: I1119 03:01:04.229431    1303 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 19 03:01:04 embed-certs-592123 kubelet[1303]: I1119 03:01:04.393198    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/34c0ebbf-6c58-4d0b-94de-dbfcf04b254d-tmp\") pod \"storage-provisioner\" (UID: \"34c0ebbf-6c58-4d0b-94de-dbfcf04b254d\") " pod="kube-system/storage-provisioner"
	Nov 19 03:01:04 embed-certs-592123 kubelet[1303]: I1119 03:01:04.393254    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsr6b\" (UniqueName: \"kubernetes.io/projected/34c0ebbf-6c58-4d0b-94de-dbfcf04b254d-kube-api-access-gsr6b\") pod \"storage-provisioner\" (UID: \"34c0ebbf-6c58-4d0b-94de-dbfcf04b254d\") " pod="kube-system/storage-provisioner"
	Nov 19 03:01:04 embed-certs-592123 kubelet[1303]: I1119 03:01:04.393278    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5e3bd982-5dec-4b41-97a5-feea8996184f-config-volume\") pod \"coredns-66bc5c9577-vtc44\" (UID: \"5e3bd982-5dec-4b41-97a5-feea8996184f\") " pod="kube-system/coredns-66bc5c9577-vtc44"
	Nov 19 03:01:04 embed-certs-592123 kubelet[1303]: I1119 03:01:04.393301    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69x5n\" (UniqueName: \"kubernetes.io/projected/5e3bd982-5dec-4b41-97a5-feea8996184f-kube-api-access-69x5n\") pod \"coredns-66bc5c9577-vtc44\" (UID: \"5e3bd982-5dec-4b41-97a5-feea8996184f\") " pod="kube-system/coredns-66bc5c9577-vtc44"
	Nov 19 03:01:04 embed-certs-592123 kubelet[1303]: W1119 03:01:04.582330    1303 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/dac66acc5df417ae4fe7f3148566f999b25ed8eb465f085a14cb838106ad0a5e/crio-9a638c5113e9ab0c04cb9f770e8fa8896dbe677d8fe2fc709e1b696648a0f478 WatchSource:0}: Error finding container 9a638c5113e9ab0c04cb9f770e8fa8896dbe677d8fe2fc709e1b696648a0f478: Status 404 returned error can't find the container with id 9a638c5113e9ab0c04cb9f770e8fa8896dbe677d8fe2fc709e1b696648a0f478
	Nov 19 03:01:04 embed-certs-592123 kubelet[1303]: W1119 03:01:04.621075    1303 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/dac66acc5df417ae4fe7f3148566f999b25ed8eb465f085a14cb838106ad0a5e/crio-7e514840273b0c3560fb051070b7aae64fd5eb267b01f6a94100b352a419ba6f WatchSource:0}: Error finding container 7e514840273b0c3560fb051070b7aae64fd5eb267b01f6a94100b352a419ba6f: Status 404 returned error can't find the container with id 7e514840273b0c3560fb051070b7aae64fd5eb267b01f6a94100b352a419ba6f
	Nov 19 03:01:05 embed-certs-592123 kubelet[1303]: I1119 03:01:05.696087    1303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-vtc44" podStartSLOduration=42.696060032 podStartE2EDuration="42.696060032s" podCreationTimestamp="2025-11-19 03:00:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 03:01:05.680576955 +0000 UTC m=+48.454943970" watchObservedRunningTime="2025-11-19 03:01:05.696060032 +0000 UTC m=+48.470427047"
	Nov 19 03:01:05 embed-certs-592123 kubelet[1303]: I1119 03:01:05.713140    1303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=41.713119926 podStartE2EDuration="41.713119926s" podCreationTimestamp="2025-11-19 03:00:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 03:01:05.697044775 +0000 UTC m=+48.471411782" watchObservedRunningTime="2025-11-19 03:01:05.713119926 +0000 UTC m=+48.487486941"
	Nov 19 03:01:08 embed-certs-592123 kubelet[1303]: I1119 03:01:08.020070    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jktpj\" (UniqueName: \"kubernetes.io/projected/1bb0ae41-6818-4b9f-bacc-21d0feb4f909-kube-api-access-jktpj\") pod \"busybox\" (UID: \"1bb0ae41-6818-4b9f-bacc-21d0feb4f909\") " pod="default/busybox"
	
	
	==> storage-provisioner [af9d481744a8acec2c90fdd2ea61585cd1d09a8bf6dbd6a42f8c9926a9bcaa78] <==
	I1119 03:01:04.680806       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1119 03:01:04.700423       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1119 03:01:04.700590       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1119 03:01:04.703938       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:01:04.713563       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 03:01:04.713955       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1119 03:01:04.716296       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"18542100-311c-4ccc-932d-a0e1133b54bb", APIVersion:"v1", ResourceVersion:"458", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-592123_75c2cdb9-a8d6-4ab1-bc5e-c5d66c7618ee became leader
	I1119 03:01:04.716503       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-592123_75c2cdb9-a8d6-4ab1-bc5e-c5d66c7618ee!
	W1119 03:01:04.743951       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:01:04.762411       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 03:01:04.817276       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-592123_75c2cdb9-a8d6-4ab1-bc5e-c5d66c7618ee!
	W1119 03:01:06.765783       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:01:06.772675       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:01:08.775665       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:01:08.780130       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:01:10.783420       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:01:10.790513       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:01:12.794559       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:01:12.802844       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:01:14.805805       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:01:14.810172       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:01:16.812741       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:01:16.817298       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:01:18.819991       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:01:18.834931       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-592123 -n embed-certs-592123
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-592123 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.35s)
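Note on the scheduler messages in the post-mortem above: the repeated "Failed to watch ... is forbidden" reflector errors typically appear while kube-scheduler starts up, before the control plane's RBAC has settled, and in this log they stop once "Caches are synced" is reported. A minimal sketch (not part of the test run) for confirming the scheduler's bootstrap RBAC on this profile; the context name is taken from the post-mortem commands above, and the system:kube-scheduler ClusterRoleBinding is the standard kubeadm bootstrap binding, assumed to be present here:

    kubectl --context embed-certs-592123 get clusterrolebinding system:kube-scheduler -o wide
    kubectl --context embed-certs-592123 auth can-i list pods --as=system:kube-scheduler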

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (6.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-579203 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p default-k8s-diff-port-579203 --alsologtostderr -v=1: exit status 80 (1.825146088s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-579203 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1119 03:02:26.092768 1661198 out.go:360] Setting OutFile to fd 1 ...
	I1119 03:02:26.092903 1661198 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 03:02:26.092914 1661198 out.go:374] Setting ErrFile to fd 2...
	I1119 03:02:26.092919 1661198 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 03:02:26.093222 1661198 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-1463525/.minikube/bin
	I1119 03:02:26.093560 1661198 out.go:368] Setting JSON to false
	I1119 03:02:26.093588 1661198 mustload.go:66] Loading cluster: default-k8s-diff-port-579203
	I1119 03:02:26.093997 1661198 config.go:182] Loaded profile config "default-k8s-diff-port-579203": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 03:02:26.094542 1661198 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-579203 --format={{.State.Status}}
	I1119 03:02:26.112243 1661198 host.go:66] Checking if "default-k8s-diff-port-579203" exists ...
	I1119 03:02:26.112551 1661198 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 03:02:26.175901 1661198 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-19 03:02:26.166496465 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 03:02:26.176537 1661198 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-579203 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1119 03:02:26.180292 1661198 out.go:179] * Pausing node default-k8s-diff-port-579203 ... 
	I1119 03:02:26.184009 1661198 host.go:66] Checking if "default-k8s-diff-port-579203" exists ...
	I1119 03:02:26.184362 1661198 ssh_runner.go:195] Run: systemctl --version
	I1119 03:02:26.184421 1661198 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-579203
	I1119 03:02:26.203205 1661198 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34915 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/default-k8s-diff-port-579203/id_rsa Username:docker}
	I1119 03:02:26.303943 1661198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 03:02:26.322882 1661198 pause.go:52] kubelet running: true
	I1119 03:02:26.322949 1661198 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1119 03:02:26.593236 1661198 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1119 03:02:26.593329 1661198 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1119 03:02:26.673982 1661198 cri.go:89] found id: "dc2899265d6b0abb8bb121734cf5340b1c1e2eaaeacba61cd625f0fd5849b46a"
	I1119 03:02:26.674000 1661198 cri.go:89] found id: "cc12ce0d09f2f1fda420bf8fe3582af2e4d897fbce86ad179d3548f3c7dd46f7"
	I1119 03:02:26.674005 1661198 cri.go:89] found id: "36dc12556790ec62ebafc51adfeddf981db6efc365694b45844fc58332452d44"
	I1119 03:02:26.674009 1661198 cri.go:89] found id: "3ea7de269e8e6d7b9b64192a351808f1a03a33517868461ef84dc108d46883a5"
	I1119 03:02:26.674013 1661198 cri.go:89] found id: "717bbd5246f66b2cc923d8f5ba5038836144f1b64ec4fff2f37f5caf1afef446"
	I1119 03:02:26.674017 1661198 cri.go:89] found id: "4516831cebdb2595b82a89b6272a4678df2d23122cf9ae52b8b5ae44bd439756"
	I1119 03:02:26.674021 1661198 cri.go:89] found id: "34a04e8a9268354f0b56354ac57651328f516cc508f9fa0c077c3b4d4336b5ac"
	I1119 03:02:26.674024 1661198 cri.go:89] found id: "3803cdc1a2993683debdee19a3b01fb09e7c32a9d12eb84a9436d969662cea8a"
	I1119 03:02:26.674027 1661198 cri.go:89] found id: "1f1f933b7182604f83325a95fc3ff39e0799211227f9d528ab807a128acc0a96"
	I1119 03:02:26.674034 1661198 cri.go:89] found id: "0b0a1ea8af8bea488a5668e7548a35e9a9b3f133dd62d0b5eec95839647aa3a9"
	I1119 03:02:26.674038 1661198 cri.go:89] found id: "60ef01ce19f59b14a39c9d03bdda2fb6b702a2cd8a2bdca3dce9e879e6a33576"
	I1119 03:02:26.674041 1661198 cri.go:89] found id: ""
	I1119 03:02:26.674088 1661198 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 03:02:26.686725 1661198 retry.go:31] will retry after 350.191895ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T03:02:26Z" level=error msg="open /run/runc: no such file or directory"
	I1119 03:02:27.037251 1661198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 03:02:27.053685 1661198 pause.go:52] kubelet running: false
	I1119 03:02:27.053744 1661198 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1119 03:02:27.244436 1661198 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1119 03:02:27.244514 1661198 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1119 03:02:27.318013 1661198 cri.go:89] found id: "dc2899265d6b0abb8bb121734cf5340b1c1e2eaaeacba61cd625f0fd5849b46a"
	I1119 03:02:27.318035 1661198 cri.go:89] found id: "cc12ce0d09f2f1fda420bf8fe3582af2e4d897fbce86ad179d3548f3c7dd46f7"
	I1119 03:02:27.318040 1661198 cri.go:89] found id: "36dc12556790ec62ebafc51adfeddf981db6efc365694b45844fc58332452d44"
	I1119 03:02:27.318044 1661198 cri.go:89] found id: "3ea7de269e8e6d7b9b64192a351808f1a03a33517868461ef84dc108d46883a5"
	I1119 03:02:27.318047 1661198 cri.go:89] found id: "717bbd5246f66b2cc923d8f5ba5038836144f1b64ec4fff2f37f5caf1afef446"
	I1119 03:02:27.318050 1661198 cri.go:89] found id: "4516831cebdb2595b82a89b6272a4678df2d23122cf9ae52b8b5ae44bd439756"
	I1119 03:02:27.318053 1661198 cri.go:89] found id: "34a04e8a9268354f0b56354ac57651328f516cc508f9fa0c077c3b4d4336b5ac"
	I1119 03:02:27.318056 1661198 cri.go:89] found id: "3803cdc1a2993683debdee19a3b01fb09e7c32a9d12eb84a9436d969662cea8a"
	I1119 03:02:27.318059 1661198 cri.go:89] found id: "1f1f933b7182604f83325a95fc3ff39e0799211227f9d528ab807a128acc0a96"
	I1119 03:02:27.318066 1661198 cri.go:89] found id: "0b0a1ea8af8bea488a5668e7548a35e9a9b3f133dd62d0b5eec95839647aa3a9"
	I1119 03:02:27.318069 1661198 cri.go:89] found id: "60ef01ce19f59b14a39c9d03bdda2fb6b702a2cd8a2bdca3dce9e879e6a33576"
	I1119 03:02:27.318072 1661198 cri.go:89] found id: ""
	I1119 03:02:27.318130 1661198 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 03:02:27.329381 1661198 retry.go:31] will retry after 246.718703ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T03:02:27Z" level=error msg="open /run/runc: no such file or directory"
	I1119 03:02:27.576704 1661198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 03:02:27.589607 1661198 pause.go:52] kubelet running: false
	I1119 03:02:27.589679 1661198 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1119 03:02:27.751521 1661198 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1119 03:02:27.751611 1661198 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1119 03:02:27.830722 1661198 cri.go:89] found id: "dc2899265d6b0abb8bb121734cf5340b1c1e2eaaeacba61cd625f0fd5849b46a"
	I1119 03:02:27.830744 1661198 cri.go:89] found id: "cc12ce0d09f2f1fda420bf8fe3582af2e4d897fbce86ad179d3548f3c7dd46f7"
	I1119 03:02:27.830749 1661198 cri.go:89] found id: "36dc12556790ec62ebafc51adfeddf981db6efc365694b45844fc58332452d44"
	I1119 03:02:27.830753 1661198 cri.go:89] found id: "3ea7de269e8e6d7b9b64192a351808f1a03a33517868461ef84dc108d46883a5"
	I1119 03:02:27.830756 1661198 cri.go:89] found id: "717bbd5246f66b2cc923d8f5ba5038836144f1b64ec4fff2f37f5caf1afef446"
	I1119 03:02:27.830760 1661198 cri.go:89] found id: "4516831cebdb2595b82a89b6272a4678df2d23122cf9ae52b8b5ae44bd439756"
	I1119 03:02:27.830763 1661198 cri.go:89] found id: "34a04e8a9268354f0b56354ac57651328f516cc508f9fa0c077c3b4d4336b5ac"
	I1119 03:02:27.830765 1661198 cri.go:89] found id: "3803cdc1a2993683debdee19a3b01fb09e7c32a9d12eb84a9436d969662cea8a"
	I1119 03:02:27.830768 1661198 cri.go:89] found id: "1f1f933b7182604f83325a95fc3ff39e0799211227f9d528ab807a128acc0a96"
	I1119 03:02:27.830780 1661198 cri.go:89] found id: "0b0a1ea8af8bea488a5668e7548a35e9a9b3f133dd62d0b5eec95839647aa3a9"
	I1119 03:02:27.830783 1661198 cri.go:89] found id: "60ef01ce19f59b14a39c9d03bdda2fb6b702a2cd8a2bdca3dce9e879e6a33576"
	I1119 03:02:27.830786 1661198 cri.go:89] found id: ""
	I1119 03:02:27.830838 1661198 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 03:02:27.845495 1661198 out.go:203] 
	W1119 03:02:27.848575 1661198 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T03:02:27Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T03:02:27Z" level=error msg="open /run/runc: no such file or directory"
	
	W1119 03:02:27.848635 1661198 out.go:285] * 
	* 
	W1119 03:02:27.858651 1661198 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 03:02:27.861640 1661198 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p default-k8s-diff-port-579203 --alsologtostderr -v=1 failed: exit status 80
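Note on the exit status 80 above: it corresponds to the GUEST_PAUSE failure in the stderr block, where every retry of `sudo runc list -f json` fails with "open /run/runc: no such file or directory", so pause never obtains a container list to act on. A minimal sketch (not part of the test run) for reproducing that step from inside the node; the profile name and the runc invocation are taken verbatim from the stderr above, and /run/runc is simply the directory the error reports as missing:

    out/minikube-linux-arm64 -p default-k8s-diff-port-579203 ssh -- sudo runc list -f json
    out/minikube-linux-arm64 -p default-k8s-diff-port-579203 ssh -- ls -ld /run/runc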
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-579203
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-579203:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d6ecbc325578cb5699745b1df1421f02473ccdb3a25858c2b3fc6ef55c70faf5",
	        "Created": "2025-11-19T02:59:35.831812475Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1656929,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T03:01:24.875954361Z",
	            "FinishedAt": "2025-11-19T03:01:24.078571417Z"
	        },
	        "Image": "sha256:6dfeb5329cc83d126555f88f43b09eef7a09c7f546c9166b94d33747df91b6df",
	        "ResolvConfPath": "/var/lib/docker/containers/d6ecbc325578cb5699745b1df1421f02473ccdb3a25858c2b3fc6ef55c70faf5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d6ecbc325578cb5699745b1df1421f02473ccdb3a25858c2b3fc6ef55c70faf5/hostname",
	        "HostsPath": "/var/lib/docker/containers/d6ecbc325578cb5699745b1df1421f02473ccdb3a25858c2b3fc6ef55c70faf5/hosts",
	        "LogPath": "/var/lib/docker/containers/d6ecbc325578cb5699745b1df1421f02473ccdb3a25858c2b3fc6ef55c70faf5/d6ecbc325578cb5699745b1df1421f02473ccdb3a25858c2b3fc6ef55c70faf5-json.log",
	        "Name": "/default-k8s-diff-port-579203",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-579203:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-579203",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d6ecbc325578cb5699745b1df1421f02473ccdb3a25858c2b3fc6ef55c70faf5",
	                "LowerDir": "/var/lib/docker/overlay2/d622a4d4992266276def27975e825f419a488b9d81d50dcaf7f9bc257af61d59-init/diff:/var/lib/docker/overlay2/c48d08e2bd245db4e1c5c6447aff9f72126e9377265a1f1172daf5070a059e2a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d622a4d4992266276def27975e825f419a488b9d81d50dcaf7f9bc257af61d59/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d622a4d4992266276def27975e825f419a488b9d81d50dcaf7f9bc257af61d59/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d622a4d4992266276def27975e825f419a488b9d81d50dcaf7f9bc257af61d59/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-579203",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-579203/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-579203",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-579203",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-579203",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "816ad20de6afc90bc5c35d80205e8832dbe6086051bc3548b5f345292d7c6451",
	            "SandboxKey": "/var/run/docker/netns/816ad20de6af",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34915"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34916"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34919"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34917"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34918"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-579203": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "16:c9:36:3a:6d:68",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8f7be654242a82c1a39285c06387290e9e449b11aff81f581eff53904d206cfb",
	                    "EndpointID": "852d362489c80720fccc4ed592bf50cc12bdb62196065f35866ad65cf3ebcf32",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-579203",
	                        "d6ecbc325578"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
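The inspect output above shows the container still in the Running state with "Paused": false after the failed pause attempt. A minimal sketch (not part of the test run) that narrows the same check to just those two fields; the container name is taken from the log:

    docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' default-k8s-diff-port-579203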
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-579203 -n default-k8s-diff-port-579203
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-579203 -n default-k8s-diff-port-579203: exit status 2 (350.77079ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-579203 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-579203 logs -n 25: (1.335748127s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p cert-options-702842 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-702842          │ jenkins │ v1.37.0 │ 19 Nov 25 02:56 UTC │ 19 Nov 25 02:56 UTC │
	│ delete  │ -p cert-options-702842                                                                                                                                                                                                                        │ cert-options-702842          │ jenkins │ v1.37.0 │ 19 Nov 25 02:56 UTC │ 19 Nov 25 02:56 UTC │
	│ start   │ -p old-k8s-version-525469 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-525469       │ jenkins │ v1.37.0 │ 19 Nov 25 02:56 UTC │ 19 Nov 25 02:57 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-525469 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-525469       │ jenkins │ v1.37.0 │ 19 Nov 25 02:58 UTC │                     │
	│ stop    │ -p old-k8s-version-525469 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-525469       │ jenkins │ v1.37.0 │ 19 Nov 25 02:58 UTC │ 19 Nov 25 02:58 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-525469 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-525469       │ jenkins │ v1.37.0 │ 19 Nov 25 02:58 UTC │ 19 Nov 25 02:58 UTC │
	│ start   │ -p old-k8s-version-525469 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-525469       │ jenkins │ v1.37.0 │ 19 Nov 25 02:58 UTC │ 19 Nov 25 02:59 UTC │
	│ image   │ old-k8s-version-525469 image list --format=json                                                                                                                                                                                               │ old-k8s-version-525469       │ jenkins │ v1.37.0 │ 19 Nov 25 02:59 UTC │ 19 Nov 25 02:59 UTC │
	│ pause   │ -p old-k8s-version-525469 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-525469       │ jenkins │ v1.37.0 │ 19 Nov 25 02:59 UTC │                     │
	│ start   │ -p cert-expiration-422184 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-422184       │ jenkins │ v1.37.0 │ 19 Nov 25 02:59 UTC │ 19 Nov 25 02:59 UTC │
	│ delete  │ -p old-k8s-version-525469                                                                                                                                                                                                                     │ old-k8s-version-525469       │ jenkins │ v1.37.0 │ 19 Nov 25 02:59 UTC │ 19 Nov 25 02:59 UTC │
	│ delete  │ -p old-k8s-version-525469                                                                                                                                                                                                                     │ old-k8s-version-525469       │ jenkins │ v1.37.0 │ 19 Nov 25 02:59 UTC │ 19 Nov 25 02:59 UTC │
	│ start   │ -p default-k8s-diff-port-579203 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-579203 │ jenkins │ v1.37.0 │ 19 Nov 25 02:59 UTC │ 19 Nov 25 03:01 UTC │
	│ delete  │ -p cert-expiration-422184                                                                                                                                                                                                                     │ cert-expiration-422184       │ jenkins │ v1.37.0 │ 19 Nov 25 02:59 UTC │ 19 Nov 25 02:59 UTC │
	│ start   │ -p embed-certs-592123 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-592123           │ jenkins │ v1.37.0 │ 19 Nov 25 02:59 UTC │ 19 Nov 25 03:01 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-579203 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-579203 │ jenkins │ v1.37.0 │ 19 Nov 25 03:01 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-579203 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-579203 │ jenkins │ v1.37.0 │ 19 Nov 25 03:01 UTC │ 19 Nov 25 03:01 UTC │
	│ addons  │ enable metrics-server -p embed-certs-592123 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-592123           │ jenkins │ v1.37.0 │ 19 Nov 25 03:01 UTC │                     │
	│ stop    │ -p embed-certs-592123 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-592123           │ jenkins │ v1.37.0 │ 19 Nov 25 03:01 UTC │ 19 Nov 25 03:01 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-579203 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-579203 │ jenkins │ v1.37.0 │ 19 Nov 25 03:01 UTC │ 19 Nov 25 03:01 UTC │
	│ start   │ -p default-k8s-diff-port-579203 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-579203 │ jenkins │ v1.37.0 │ 19 Nov 25 03:01 UTC │ 19 Nov 25 03:02 UTC │
	│ addons  │ enable dashboard -p embed-certs-592123 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-592123           │ jenkins │ v1.37.0 │ 19 Nov 25 03:01 UTC │ 19 Nov 25 03:01 UTC │
	│ start   │ -p embed-certs-592123 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-592123           │ jenkins │ v1.37.0 │ 19 Nov 25 03:01 UTC │ 19 Nov 25 03:02 UTC │
	│ image   │ default-k8s-diff-port-579203 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-579203 │ jenkins │ v1.37.0 │ 19 Nov 25 03:02 UTC │ 19 Nov 25 03:02 UTC │
	│ pause   │ -p default-k8s-diff-port-579203 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-579203 │ jenkins │ v1.37.0 │ 19 Nov 25 03:02 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 03:01:31
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
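
Every entry below carries the klog-style prefix described on the line above. As an illustrative aid only (the regexp and field names are mine, not minikube code), a minimal Go sketch that splits such a line into its parts:

	package main

	import (
		"fmt"
		"regexp"
	)

	// klogLine matches the prefix documented above:
	// [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	var klogLine = regexp.MustCompile(`^([IWEF])(\d{2})(\d{2}) (\d{2}:\d{2}:\d{2}\.\d{6}) +(\d+) ([\w.-]+:\d+)\] (.*)$`)

	func main() {
		line := "I1119 03:01:31.694099 1658016 out.go:360] Setting OutFile to fd 1 ..."
		if m := klogLine.FindStringSubmatch(line); m != nil {
			fmt.Printf("severity=%s date=%s-%s time=%s pid=%s source=%s msg=%q\n",
				m[1], m[2], m[3], m[4], m[5], m[6], m[7])
		}
	}
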
	I1119 03:01:31.694099 1658016 out.go:360] Setting OutFile to fd 1 ...
	I1119 03:01:31.694285 1658016 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 03:01:31.694312 1658016 out.go:374] Setting ErrFile to fd 2...
	I1119 03:01:31.694333 1658016 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 03:01:31.694632 1658016 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-1463525/.minikube/bin
	I1119 03:01:31.695116 1658016 out.go:368] Setting JSON to false
	I1119 03:01:31.696038 1658016 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":38619,"bootTime":1763482673,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1119 03:01:31.696425 1658016 start.go:143] virtualization:  
	I1119 03:01:31.700158 1658016 out.go:179] * [embed-certs-592123] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1119 03:01:31.704182 1658016 out.go:179]   - MINIKUBE_LOCATION=21924
	I1119 03:01:31.704392 1658016 notify.go:221] Checking for updates...
	I1119 03:01:31.710069 1658016 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 03:01:31.712926 1658016 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21924-1463525/kubeconfig
	I1119 03:01:31.715720 1658016 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-1463525/.minikube
	I1119 03:01:31.718664 1658016 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1119 03:01:31.721447 1658016 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 03:01:31.724877 1658016 config.go:182] Loaded profile config "embed-certs-592123": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 03:01:31.725487 1658016 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 03:01:31.778202 1658016 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1119 03:01:31.778328 1658016 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 03:01:31.876699 1658016 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-19 03:01:31.865000467 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
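
The docker system info --format "{{json .}}" call above emits a single JSON object, which is what info.go then decodes. A small stand-in sketch, assuming the Docker CLI is on PATH and covering only a few of the keys visible in the log (ServerVersion, OperatingSystem, NCPU, MemTotal, CgroupDriver):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// dockerInfo picks out a handful of the fields visible in the log above.
	type dockerInfo struct {
		ServerVersion   string
		OperatingSystem string
		NCPU            int
		MemTotal        int64
		CgroupDriver    string
	}

	func main() {
		out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
		if err != nil {
			panic(err)
		}
		var info dockerInfo
		if err := json.Unmarshal(out, &info); err != nil {
			panic(err)
		}
		fmt.Printf("docker %s on %s: %d CPUs, %d bytes RAM, cgroup driver %s\n",
			info.ServerVersion, info.OperatingSystem, info.NCPU, info.MemTotal, info.CgroupDriver)
	}
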
	I1119 03:01:31.876810 1658016 docker.go:319] overlay module found
	I1119 03:01:31.879893 1658016 out.go:179] * Using the docker driver based on existing profile
	I1119 03:01:31.882733 1658016 start.go:309] selected driver: docker
	I1119 03:01:31.882756 1658016 start.go:930] validating driver "docker" against &{Name:embed-certs-592123 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-592123 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 03:01:31.882852 1658016 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 03:01:31.883508 1658016 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 03:01:31.999624 1658016 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-19 03:01:31.986051413 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 03:01:31.999963 1658016 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 03:01:31.999988 1658016 cni.go:84] Creating CNI manager for ""
	I1119 03:01:32.000043 1658016 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 03:01:32.000082 1658016 start.go:353] cluster config:
	{Name:embed-certs-592123 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-592123 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 03:01:32.003455 1658016 out.go:179] * Starting "embed-certs-592123" primary control-plane node in "embed-certs-592123" cluster
	I1119 03:01:32.006956 1658016 cache.go:134] Beginning downloading kic base image for docker with crio
	I1119 03:01:32.010136 1658016 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1119 03:01:32.013087 1658016 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 03:01:32.013131 1658016 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1119 03:01:32.013142 1658016 cache.go:65] Caching tarball of preloaded images
	I1119 03:01:32.013223 1658016 preload.go:238] Found /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1119 03:01:32.013232 1658016 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 03:01:32.013358 1658016 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/embed-certs-592123/config.json ...
	I1119 03:01:32.013613 1658016 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1119 03:01:32.043034 1658016 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1119 03:01:32.043057 1658016 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1119 03:01:32.043069 1658016 cache.go:243] Successfully downloaded all kic artifacts
	I1119 03:01:32.043094 1658016 start.go:360] acquireMachinesLock for embed-certs-592123: {Name:mkad274f419d3f3256db7dae28b742586dc2ebd2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 03:01:32.043146 1658016 start.go:364] duration metric: took 35.084µs to acquireMachinesLock for "embed-certs-592123"
	I1119 03:01:32.043166 1658016 start.go:96] Skipping create...Using existing machine configuration
	I1119 03:01:32.043171 1658016 fix.go:54] fixHost starting: 
	I1119 03:01:32.043430 1658016 cli_runner.go:164] Run: docker container inspect embed-certs-592123 --format={{.State.Status}}
	I1119 03:01:32.073987 1658016 fix.go:112] recreateIfNeeded on embed-certs-592123: state=Stopped err=<nil>
	W1119 03:01:32.074014 1658016 fix.go:138] unexpected machine state, will restart: <nil>
	I1119 03:01:31.382636 1656802 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-579203 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 03:01:31.408700 1656802 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1119 03:01:31.418901 1656802 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
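
The bash one-liner above refreshes the host.minikube.internal mapping by filtering out any old entry and appending the new one. A rough Go equivalent of the same idea (hard-coded path and values, no sudo handling; purely illustrative, not minikube's implementation):

	package main

	import (
		"os"
		"strings"
	)

	// refreshHostsEntry drops any existing line ending in "\t<name>" and appends
	// the desired mapping, mirroring the grep/echo pipeline above.
	func refreshHostsEntry(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if strings.HasSuffix(line, "\t"+name) {
				continue // old entry, drop it
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+name)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		if err := refreshHostsEntry("/etc/hosts", "192.168.85.1", "host.minikube.internal"); err != nil {
			panic(err)
		}
	}
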
	I1119 03:01:31.429751 1656802 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-579203 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-579203 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 03:01:31.429874 1656802 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 03:01:31.429932 1656802 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 03:01:31.486743 1656802 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 03:01:31.486766 1656802 crio.go:433] Images already preloaded, skipping extraction
	I1119 03:01:31.486818 1656802 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 03:01:31.524977 1656802 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 03:01:31.525001 1656802 cache_images.go:86] Images are preloaded, skipping loading
	I1119 03:01:31.525009 1656802 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1119 03:01:31.525101 1656802 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-579203 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-579203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
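
The [Unit]/[Service] drop-in printed above is what lands in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines further down. A hypothetical sketch of assembling such a drop-in; the flag list and paths are copied from the log, but this is not minikube's actual generator:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// writeKubeletDropIn assembles a systemd drop-in like the one logged above.
	func writeKubeletDropIn(path, kubelet, nodeName, nodeIP string) error {
		flags := []string{
			"--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf",
			"--config=/var/lib/kubelet/config.yaml",
			"--hostname-override=" + nodeName,
			"--kubeconfig=/etc/kubernetes/kubelet.conf",
			"--node-ip=" + nodeIP,
		}
		unit := fmt.Sprintf("[Unit]\nWants=crio.service\n\n[Service]\nExecStart=\nExecStart=%s %s\n\n[Install]\n",
			kubelet, strings.Join(flags, " "))
		return os.WriteFile(path, []byte(unit), 0644)
	}

	func main() {
		err := writeKubeletDropIn("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf",
			"/var/lib/minikube/binaries/v1.34.1/kubelet", "default-k8s-diff-port-579203", "192.168.85.2")
		if err != nil {
			panic(err)
		}
	}
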
	I1119 03:01:31.525185 1656802 ssh_runner.go:195] Run: crio config
	I1119 03:01:31.608065 1656802 cni.go:84] Creating CNI manager for ""
	I1119 03:01:31.608085 1656802 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 03:01:31.608109 1656802 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 03:01:31.608132 1656802 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-579203 NodeName:default-k8s-diff-port-579203 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 03:01:31.608267 1656802 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-579203"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1119 03:01:31.608333 1656802 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 03:01:31.616713 1656802 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 03:01:31.616796 1656802 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 03:01:31.625905 1656802 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1119 03:01:31.640197 1656802 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 03:01:31.654632 1656802 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
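
The "scp memory -->" steps write an in-memory buffer straight to a path inside the container over the existing SSH connection. A stand-in sketch (not minikube's transfer code) that pipes the bytes into sudo tee over an x/crypto/ssh session, assuming the profile's id_rsa key and the mapped port 34915 that appear later in this log:

	package main

	import (
		"bytes"
		"os"

		"golang.org/x/crypto/ssh"
	)

	// copyMemory writes data to remotePath over an already-dialled SSH client by
	// piping it into "sudo tee"; a stand-in for the "scp memory -->" steps above.
	func copyMemory(client *ssh.Client, data []byte, remotePath string) error {
		session, err := client.NewSession()
		if err != nil {
			return err
		}
		defer session.Close()
		session.Stdin = bytes.NewReader(data)
		return session.Run("sudo tee " + remotePath + " >/dev/null")
	}

	func main() {
		key, err := os.ReadFile(os.Getenv("HOME") + "/.minikube/machines/default-k8s-diff-port-579203/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:34915", &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(),
		})
		if err != nil {
			panic(err)
		}
		defer client.Close()
		if err := copyMemory(client, []byte("# placeholder\n"), "/var/tmp/minikube/kubeadm.yaml.new"); err != nil {
			panic(err)
		}
	}
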
	I1119 03:01:31.669129 1656802 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1119 03:01:31.673150 1656802 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 03:01:31.684012 1656802 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 03:01:31.842706 1656802 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 03:01:31.862589 1656802 certs.go:69] Setting up /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/default-k8s-diff-port-579203 for IP: 192.168.85.2
	I1119 03:01:31.862611 1656802 certs.go:195] generating shared ca certs ...
	I1119 03:01:31.862626 1656802 certs.go:227] acquiring lock for ca certs: {Name:mk25124d30540ffea11da5f0aa38fb6f55186602 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 03:01:31.862778 1656802 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.key
	I1119 03:01:31.862824 1656802 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/proxy-client-ca.key
	I1119 03:01:31.862834 1656802 certs.go:257] generating profile certs ...
	I1119 03:01:31.862921 1656802 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/default-k8s-diff-port-579203/client.key
	I1119 03:01:31.863016 1656802 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/default-k8s-diff-port-579203/apiserver.key.1f3db3c7
	I1119 03:01:31.863059 1656802 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/default-k8s-diff-port-579203/proxy-client.key
	I1119 03:01:31.863172 1656802 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/1465377.pem (1338 bytes)
	W1119 03:01:31.863209 1656802 certs.go:480] ignoring /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/1465377_empty.pem, impossibly tiny 0 bytes
	I1119 03:01:31.863219 1656802 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca-key.pem (1679 bytes)
	I1119 03:01:31.863244 1656802 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem (1078 bytes)
	I1119 03:01:31.863266 1656802 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/cert.pem (1123 bytes)
	I1119 03:01:31.863287 1656802 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/key.pem (1675 bytes)
	I1119 03:01:31.863333 1656802 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/files/etc/ssl/certs/14653772.pem (1708 bytes)
	I1119 03:01:31.863893 1656802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 03:01:31.919072 1656802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1119 03:01:31.961266 1656802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 03:01:32.004872 1656802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1119 03:01:32.062731 1656802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/default-k8s-diff-port-579203/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1119 03:01:32.113061 1656802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/default-k8s-diff-port-579203/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1119 03:01:32.152998 1656802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/default-k8s-diff-port-579203/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 03:01:32.175738 1656802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/default-k8s-diff-port-579203/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1119 03:01:32.211491 1656802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/files/etc/ssl/certs/14653772.pem --> /usr/share/ca-certificates/14653772.pem (1708 bytes)
	I1119 03:01:32.240771 1656802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 03:01:32.291001 1656802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/1465377.pem --> /usr/share/ca-certificates/1465377.pem (1338 bytes)
	I1119 03:01:32.312199 1656802 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 03:01:32.328177 1656802 ssh_runner.go:195] Run: openssl version
	I1119 03:01:32.335811 1656802 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 03:01:32.347836 1656802 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 03:01:32.352606 1656802 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 01:58 /usr/share/ca-certificates/minikubeCA.pem
	I1119 03:01:32.352723 1656802 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 03:01:32.399906 1656802 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 03:01:32.418016 1656802 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1465377.pem && ln -fs /usr/share/ca-certificates/1465377.pem /etc/ssl/certs/1465377.pem"
	I1119 03:01:32.431940 1656802 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1465377.pem
	I1119 03:01:32.436366 1656802 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 02:04 /usr/share/ca-certificates/1465377.pem
	I1119 03:01:32.436425 1656802 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1465377.pem
	I1119 03:01:32.525416 1656802 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1465377.pem /etc/ssl/certs/51391683.0"
	I1119 03:01:32.534432 1656802 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14653772.pem && ln -fs /usr/share/ca-certificates/14653772.pem /etc/ssl/certs/14653772.pem"
	I1119 03:01:32.546942 1656802 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14653772.pem
	I1119 03:01:32.552347 1656802 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 02:04 /usr/share/ca-certificates/14653772.pem
	I1119 03:01:32.552406 1656802 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14653772.pem
	I1119 03:01:32.654831 1656802 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14653772.pem /etc/ssl/certs/3ec20f2e.0"
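
The three certificate installs above all follow the same pattern: copy the PEM under /usr/share/ca-certificates, take its openssl subject hash, and point /etc/ssl/certs/<hash>.0 at it. A small sketch of that pattern (needs root; shells out to openssl for the hash, as the log does):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// installCA mirrors the steps above: hash the certificate with openssl and
	// symlink it into /etc/ssl/certs under "<hash>.0".
	func installCA(certPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := "/etc/ssl/certs/" + hash + ".0"
		_ = os.Remove(link) // replace any stale link, like "ln -fs"
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
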
	I1119 03:01:32.679397 1656802 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 03:01:32.693377 1656802 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1119 03:01:32.808831 1656802 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1119 03:01:32.877180 1656802 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1119 03:01:33.051675 1656802 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1119 03:01:33.127180 1656802 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1119 03:01:33.200498 1656802 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
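
Each "-checkend 86400" call above asks whether the certificate expires within the next 24 hours. The same check can be done natively; a sketch using crypto/x509 against one of the paths from the log:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires within d,
	// the question "openssl x509 -checkend 86400" answers for d = 24h.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			panic(err)
		}
		fmt.Println("expires within 24h:", soon)
	}
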
	I1119 03:01:33.262031 1656802 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-579203 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-579203 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 03:01:33.262163 1656802 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 03:01:33.262251 1656802 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 03:01:33.312927 1656802 cri.go:89] found id: "4516831cebdb2595b82a89b6272a4678df2d23122cf9ae52b8b5ae44bd439756"
	I1119 03:01:33.312997 1656802 cri.go:89] found id: "34a04e8a9268354f0b56354ac57651328f516cc508f9fa0c077c3b4d4336b5ac"
	I1119 03:01:33.313027 1656802 cri.go:89] found id: "3803cdc1a2993683debdee19a3b01fb09e7c32a9d12eb84a9436d969662cea8a"
	I1119 03:01:33.313044 1656802 cri.go:89] found id: "1f1f933b7182604f83325a95fc3ff39e0799211227f9d528ab807a128acc0a96"
	I1119 03:01:33.313062 1656802 cri.go:89] found id: ""
	I1119 03:01:33.313127 1656802 ssh_runner.go:195] Run: sudo runc list -f json
	W1119 03:01:33.327507 1656802 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T03:01:33Z" level=error msg="open /run/runc: no such file or directory"
	I1119 03:01:33.327640 1656802 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 03:01:33.340875 1656802 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1119 03:01:33.340932 1656802 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1119 03:01:33.340994 1656802 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1119 03:01:33.353524 1656802 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1119 03:01:33.353973 1656802 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-579203" does not appear in /home/jenkins/minikube-integration/21924-1463525/kubeconfig
	I1119 03:01:33.354133 1656802 kubeconfig.go:62] /home/jenkins/minikube-integration/21924-1463525/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-579203" cluster setting kubeconfig missing "default-k8s-diff-port-579203" context setting]
	I1119 03:01:33.354443 1656802 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/kubeconfig: {Name:mk569dbb5fa76b01404eb61ebb04e9884665ea78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 03:01:33.355806 1656802 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1119 03:01:33.366501 1656802 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1119 03:01:33.366570 1656802 kubeadm.go:602] duration metric: took 25.618557ms to restartPrimaryControlPlane
	I1119 03:01:33.366594 1656802 kubeadm.go:403] duration metric: took 104.5714ms to StartCluster
	I1119 03:01:33.366623 1656802 settings.go:142] acquiring lock: {Name:mk60840cd190596747bdc8f4ec6ab9c30048bf11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 03:01:33.366710 1656802 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21924-1463525/kubeconfig
	I1119 03:01:33.367353 1656802 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/kubeconfig: {Name:mk569dbb5fa76b01404eb61ebb04e9884665ea78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 03:01:33.367575 1656802 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 03:01:33.367877 1656802 config.go:182] Loaded profile config "default-k8s-diff-port-579203": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 03:01:33.367951 1656802 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 03:01:33.368075 1656802 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-579203"
	I1119 03:01:33.368105 1656802 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-579203"
	W1119 03:01:33.368131 1656802 addons.go:248] addon storage-provisioner should already be in state true
	I1119 03:01:33.368165 1656802 host.go:66] Checking if "default-k8s-diff-port-579203" exists ...
	I1119 03:01:33.368781 1656802 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-579203 --format={{.State.Status}}
	I1119 03:01:33.368945 1656802 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-579203"
	I1119 03:01:33.368980 1656802 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-579203"
	W1119 03:01:33.369000 1656802 addons.go:248] addon dashboard should already be in state true
	I1119 03:01:33.369036 1656802 host.go:66] Checking if "default-k8s-diff-port-579203" exists ...
	I1119 03:01:33.369232 1656802 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-579203"
	I1119 03:01:33.369256 1656802 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-579203"
	I1119 03:01:33.369481 1656802 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-579203 --format={{.State.Status}}
	I1119 03:01:33.369570 1656802 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-579203 --format={{.State.Status}}
	I1119 03:01:33.377538 1656802 out.go:179] * Verifying Kubernetes components...
	I1119 03:01:33.380901 1656802 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 03:01:33.428249 1656802 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1119 03:01:33.430633 1656802 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-579203"
	W1119 03:01:33.430652 1656802 addons.go:248] addon default-storageclass should already be in state true
	I1119 03:01:33.430675 1656802 host.go:66] Checking if "default-k8s-diff-port-579203" exists ...
	I1119 03:01:33.431093 1656802 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-579203 --format={{.State.Status}}
	I1119 03:01:33.431241 1656802 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 03:01:33.436527 1656802 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1119 03:01:33.436640 1656802 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 03:01:33.436650 1656802 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 03:01:33.436709 1656802 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-579203
	I1119 03:01:33.439380 1656802 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1119 03:01:33.439406 1656802 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1119 03:01:33.439467 1656802 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-579203
	I1119 03:01:33.482617 1656802 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 03:01:33.482637 1656802 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 03:01:33.482834 1656802 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-579203
	I1119 03:01:33.495291 1656802 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34915 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/default-k8s-diff-port-579203/id_rsa Username:docker}
	I1119 03:01:33.501482 1656802 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34915 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/default-k8s-diff-port-579203/id_rsa Username:docker}
	I1119 03:01:33.523106 1656802 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34915 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/default-k8s-diff-port-579203/id_rsa Username:docker}
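
The docker container inspect template used above resolves which host port is published for the container's 22/tcp, which is where the new ssh clients on 127.0.0.1:34915 come from. A small sketch of the same query via the Docker CLI:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hostSSHPort asks Docker which host port is mapped to the container's 22/tcp,
	// using the same Go template shown in the log above.
	func hostSSHPort(container string) (string, error) {
		format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, err := hostSSHPort("default-k8s-diff-port-579203")
		if err != nil {
			panic(err)
		}
		fmt.Println("ssh reachable at 127.0.0.1:" + port)
	}
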
	I1119 03:01:33.726999 1656802 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 03:01:33.732353 1656802 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1119 03:01:33.732379 1656802 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1119 03:01:33.743826 1656802 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 03:01:33.754537 1656802 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 03:01:33.786983 1656802 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-579203" to be "Ready" ...
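
The 6m0s wait started here polls the node's Ready condition; minikube itself queries the API with its Kubernetes client, but a rough kubectl-based stand-in looks like this (kubeconfig path taken from the log, polling interval is my own choice):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitNodeReady polls the node's Ready condition via kubectl until it reports
	// "True" or the timeout elapses.
	func waitNodeReady(kubeconfig, node string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "--kubeconfig", kubeconfig,
				"get", "node", node,
				"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
			if err == nil && strings.TrimSpace(string(out)) == "True" {
				return nil
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("node %s not Ready after %s", node, timeout)
	}

	func main() {
		err := waitNodeReady("/var/lib/minikube/kubeconfig", "default-k8s-diff-port-579203", 6*time.Minute)
		if err != nil {
			panic(err)
		}
		fmt.Println("node is Ready")
	}
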
	I1119 03:01:33.795696 1656802 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1119 03:01:33.795716 1656802 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1119 03:01:33.841748 1656802 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1119 03:01:33.841773 1656802 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1119 03:01:33.902688 1656802 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1119 03:01:33.902718 1656802 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1119 03:01:33.991741 1656802 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1119 03:01:33.991766 1656802 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1119 03:01:34.135318 1656802 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1119 03:01:34.135341 1656802 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1119 03:01:34.156785 1656802 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1119 03:01:34.156810 1656802 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1119 03:01:34.201589 1656802 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1119 03:01:34.201614 1656802 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1119 03:01:34.228872 1656802 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1119 03:01:34.228897 1656802 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1119 03:01:34.265369 1656802 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
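
The dashboard manifests staged under /etc/kubernetes/addons are applied in a single kubectl invocation, run over SSH inside the node with the in-VM kubeconfig. A simplified local stand-in for that call (two example manifests only; the real step runs remotely with sudo):

	package main

	import (
		"os"
		"os/exec"
	)

	// applyAddon mirrors the invocation above: kubectl apply -f over a list of
	// manifest paths, with KUBECONFIG pointing at the cluster's admin kubeconfig.
	func applyAddon(kubectl, kubeconfig string, manifests []string) error {
		args := []string{"apply"}
		for _, m := range manifests {
			args = append(args, "-f", m)
		}
		cmd := exec.Command(kubectl, args...)
		cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		return cmd.Run()
	}

	func main() {
		err := applyAddon("/var/lib/minikube/binaries/v1.34.1/kubectl", "/var/lib/minikube/kubeconfig",
			[]string{"/etc/kubernetes/addons/dashboard-ns.yaml", "/etc/kubernetes/addons/dashboard-svc.yaml"})
		if err != nil {
			panic(err)
		}
	}
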
	I1119 03:01:32.077538 1658016 out.go:252] * Restarting existing docker container for "embed-certs-592123" ...
	I1119 03:01:32.077619 1658016 cli_runner.go:164] Run: docker start embed-certs-592123
	I1119 03:01:32.422612 1658016 cli_runner.go:164] Run: docker container inspect embed-certs-592123 --format={{.State.Status}}
	I1119 03:01:32.449537 1658016 kic.go:430] container "embed-certs-592123" state is running.
	I1119 03:01:32.449915 1658016 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-592123
	I1119 03:01:32.480682 1658016 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/embed-certs-592123/config.json ...
	I1119 03:01:32.480950 1658016 machine.go:94] provisionDockerMachine start ...
	I1119 03:01:32.481018 1658016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-592123
	I1119 03:01:32.509145 1658016 main.go:143] libmachine: Using SSH client type: native
	I1119 03:01:32.509977 1658016 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34920 <nil> <nil>}
	I1119 03:01:32.510001 1658016 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 03:01:32.510759 1658016 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1119 03:01:35.689449 1658016 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-592123
	
	I1119 03:01:35.689469 1658016 ubuntu.go:182] provisioning hostname "embed-certs-592123"
	I1119 03:01:35.689545 1658016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-592123
	I1119 03:01:35.716862 1658016 main.go:143] libmachine: Using SSH client type: native
	I1119 03:01:35.717166 1658016 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34920 <nil> <nil>}
	I1119 03:01:35.717177 1658016 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-592123 && echo "embed-certs-592123" | sudo tee /etc/hostname
	I1119 03:01:35.916932 1658016 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-592123
	
	I1119 03:01:35.917083 1658016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-592123
	I1119 03:01:35.939266 1658016 main.go:143] libmachine: Using SSH client type: native
	I1119 03:01:35.939572 1658016 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34920 <nil> <nil>}
	I1119 03:01:35.939676 1658016 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-592123' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-592123/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-592123' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 03:01:36.113904 1658016 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 03:01:36.113929 1658016 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21924-1463525/.minikube CaCertPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21924-1463525/.minikube}
	I1119 03:01:36.113990 1658016 ubuntu.go:190] setting up certificates
	I1119 03:01:36.114001 1658016 provision.go:84] configureAuth start
	I1119 03:01:36.114075 1658016 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-592123
	I1119 03:01:36.145721 1658016 provision.go:143] copyHostCerts
	I1119 03:01:36.145794 1658016 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-1463525/.minikube/cert.pem, removing ...
	I1119 03:01:36.145816 1658016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-1463525/.minikube/cert.pem
	I1119 03:01:36.145903 1658016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21924-1463525/.minikube/cert.pem (1123 bytes)
	I1119 03:01:36.146003 1658016 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-1463525/.minikube/key.pem, removing ...
	I1119 03:01:36.146015 1658016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-1463525/.minikube/key.pem
	I1119 03:01:36.146042 1658016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21924-1463525/.minikube/key.pem (1675 bytes)
	I1119 03:01:36.146101 1658016 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.pem, removing ...
	I1119 03:01:36.146111 1658016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.pem
	I1119 03:01:36.146146 1658016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.pem (1078 bytes)
	I1119 03:01:36.146200 1658016 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca-key.pem org=jenkins.embed-certs-592123 san=[127.0.0.1 192.168.76.2 embed-certs-592123 localhost minikube]
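
The server cert generated here carries the SAN list shown in the log (127.0.0.1, 192.168.76.2, the profile name, localhost, minikube) and is signed by the minikube CA. A self-signed sketch that only demonstrates the SAN handling with crypto/x509 (real provisioning signs with the CA key pair, not with the leaf key):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-592123"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"embed-certs-592123", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
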
	I1119 03:01:38.867365 1656802 node_ready.go:49] node "default-k8s-diff-port-579203" is "Ready"
	I1119 03:01:38.867396 1656802 node_ready.go:38] duration metric: took 5.080374221s for node "default-k8s-diff-port-579203" to be "Ready" ...
	I1119 03:01:38.867410 1656802 api_server.go:52] waiting for apiserver process to appear ...
	I1119 03:01:38.867468 1656802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 03:01:39.376543 1656802 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.63267581s)
	I1119 03:01:37.481002 1658016 provision.go:177] copyRemoteCerts
	I1119 03:01:37.481123 1658016 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 03:01:37.481187 1658016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-592123
	I1119 03:01:37.498613 1658016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34920 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/embed-certs-592123/id_rsa Username:docker}
	I1119 03:01:37.622071 1658016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1119 03:01:37.659368 1658016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1119 03:01:37.683985 1658016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1119 03:01:37.711470 1658016 provision.go:87] duration metric: took 1.597442922s to configureAuth
	I1119 03:01:37.711500 1658016 ubuntu.go:206] setting minikube options for container-runtime
	I1119 03:01:37.711743 1658016 config.go:182] Loaded profile config "embed-certs-592123": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 03:01:37.711889 1658016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-592123
	I1119 03:01:37.734957 1658016 main.go:143] libmachine: Using SSH client type: native
	I1119 03:01:37.735283 1658016 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34920 <nil> <nil>}
	I1119 03:01:37.735301 1658016 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 03:01:38.285895 1658016 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
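(Editor's note: the SSH command above writes the CRI-O options file and restarts the runtime. A quick way to confirm the setting on the node, assuming the profile from this run and that `minikube ssh` forwards the trailing command as usual:)

	# Illustrative check of the insecure-registry flag minikube wrote for CRI-O.
	minikube -p embed-certs-592123 ssh -- cat /etc/sysconfig/crio.minikube
	# Expected content, per the log above:
	# CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '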
	I1119 03:01:38.285922 1658016 machine.go:97] duration metric: took 5.804954413s to provisionDockerMachine
	I1119 03:01:38.285933 1658016 start.go:293] postStartSetup for "embed-certs-592123" (driver="docker")
	I1119 03:01:38.285967 1658016 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 03:01:38.286049 1658016 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 03:01:38.286112 1658016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-592123
	I1119 03:01:38.312919 1658016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34920 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/embed-certs-592123/id_rsa Username:docker}
	I1119 03:01:38.443748 1658016 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 03:01:38.447660 1658016 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 03:01:38.447689 1658016 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 03:01:38.447700 1658016 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-1463525/.minikube/addons for local assets ...
	I1119 03:01:38.447753 1658016 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-1463525/.minikube/files for local assets ...
	I1119 03:01:38.447832 1658016 filesync.go:149] local asset: /home/jenkins/minikube-integration/21924-1463525/.minikube/files/etc/ssl/certs/14653772.pem -> 14653772.pem in /etc/ssl/certs
	I1119 03:01:38.447940 1658016 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 03:01:38.461273 1658016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/files/etc/ssl/certs/14653772.pem --> /etc/ssl/certs/14653772.pem (1708 bytes)
	I1119 03:01:38.510174 1658016 start.go:296] duration metric: took 224.22474ms for postStartSetup
	I1119 03:01:38.510274 1658016 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 03:01:38.510320 1658016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-592123
	I1119 03:01:38.539011 1658016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34920 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/embed-certs-592123/id_rsa Username:docker}
	I1119 03:01:38.663137 1658016 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 03:01:38.674018 1658016 fix.go:56] duration metric: took 6.630839224s for fixHost
	I1119 03:01:38.674045 1658016 start.go:83] releasing machines lock for "embed-certs-592123", held for 6.630889873s
	I1119 03:01:38.674129 1658016 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-592123
	I1119 03:01:38.702738 1658016 ssh_runner.go:195] Run: cat /version.json
	I1119 03:01:38.702792 1658016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-592123
	I1119 03:01:38.703037 1658016 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 03:01:38.703099 1658016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-592123
	I1119 03:01:38.735230 1658016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34920 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/embed-certs-592123/id_rsa Username:docker}
	I1119 03:01:38.749542 1658016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34920 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/embed-certs-592123/id_rsa Username:docker}
	I1119 03:01:38.891506 1658016 ssh_runner.go:195] Run: systemctl --version
	I1119 03:01:39.029294 1658016 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 03:01:39.115959 1658016 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 03:01:39.123862 1658016 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 03:01:39.123949 1658016 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 03:01:39.140196 1658016 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1119 03:01:39.140222 1658016 start.go:496] detecting cgroup driver to use...
	I1119 03:01:39.140257 1658016 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1119 03:01:39.140335 1658016 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 03:01:39.168991 1658016 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 03:01:39.192845 1658016 docker.go:218] disabling cri-docker service (if available) ...
	I1119 03:01:39.192949 1658016 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 03:01:39.214043 1658016 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 03:01:39.240416 1658016 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 03:01:39.443354 1658016 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 03:01:39.716017 1658016 docker.go:234] disabling docker service ...
	I1119 03:01:39.716110 1658016 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 03:01:39.747581 1658016 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 03:01:39.774942 1658016 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 03:01:40.008458 1658016 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 03:01:40.213317 1658016 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 03:01:40.238337 1658016 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 03:01:40.268239 1658016 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 03:01:40.268353 1658016 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 03:01:40.278269 1658016 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1119 03:01:40.278391 1658016 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 03:01:40.294015 1658016 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 03:01:40.303202 1658016 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 03:01:40.318174 1658016 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 03:01:40.328359 1658016 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 03:01:40.344557 1658016 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 03:01:40.353087 1658016 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 03:01:40.365543 1658016 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 03:01:40.383809 1658016 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 03:01:40.396307 1658016 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 03:01:40.592886 1658016 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1119 03:01:40.826288 1658016 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 03:01:40.826370 1658016 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 03:01:40.832512 1658016 start.go:564] Will wait 60s for crictl version
	I1119 03:01:40.832625 1658016 ssh_runner.go:195] Run: which crictl
	I1119 03:01:40.838189 1658016 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 03:01:40.876669 1658016 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1119 03:01:40.876764 1658016 ssh_runner.go:195] Run: crio --version
	I1119 03:01:40.948048 1658016 ssh_runner.go:195] Run: crio --version
	I1119 03:01:40.996697 1658016 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
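(Editor's note: the sed edits above boil down to a handful of changes in /etc/crio/crio.conf.d/02-crio.conf followed by a runtime restart. A condensed sketch of the same steps, using the paths and values from this run rather than the exact command sequence minikube issues:)

	# Point CRI-O at the pause image and cgroup driver used in this run.
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	# Reload unit files and restart the runtime so the new settings take effect.
	sudo systemctl daemon-reload
	sudo systemctl restart crio
	# Verify the runtime came back with the expected version (as the log does via crictl).
	sudo /usr/local/bin/crictl version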
	I1119 03:01:41.430835 1656802 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.676261904s)
	I1119 03:01:41.661740 1656802 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.396328561s)
	I1119 03:01:41.661898 1656802 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.794418905s)
	I1119 03:01:41.661913 1656802 api_server.go:72] duration metric: took 8.294288111s to wait for apiserver process to appear ...
	I1119 03:01:41.661919 1656802 api_server.go:88] waiting for apiserver healthz status ...
	I1119 03:01:41.661935 1656802 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1119 03:01:41.664921 1656802 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-579203 addons enable metrics-server
	
	I1119 03:01:41.667834 1656802 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1119 03:01:40.999730 1658016 cli_runner.go:164] Run: docker network inspect embed-certs-592123 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 03:01:41.022357 1658016 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1119 03:01:41.026453 1658016 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 03:01:41.038517 1658016 kubeadm.go:884] updating cluster {Name:embed-certs-592123 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-592123 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 03:01:41.038634 1658016 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 03:01:41.038710 1658016 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 03:01:41.092718 1658016 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 03:01:41.092746 1658016 crio.go:433] Images already preloaded, skipping extraction
	I1119 03:01:41.092805 1658016 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 03:01:41.148452 1658016 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 03:01:41.148473 1658016 cache_images.go:86] Images are preloaded, skipping loading
	I1119 03:01:41.148481 1658016 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1119 03:01:41.148578 1658016 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-592123 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-592123 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 03:01:41.148659 1658016 ssh_runner.go:195] Run: crio config
	I1119 03:01:41.262176 1658016 cni.go:84] Creating CNI manager for ""
	I1119 03:01:41.262209 1658016 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 03:01:41.262232 1658016 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 03:01:41.262256 1658016 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-592123 NodeName:embed-certs-592123 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 03:01:41.262400 1658016 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-592123"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
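(Editor's note: the generated kubeadm/kubelet configuration shown above is written to the node as /var/tmp/minikube/kubeadm.yaml.new and diffed against the existing kubeadm.yaml further down. A quick sanity check of the rendered file on the node, assuming kubeadm is present in the cached binaries directory and that your kubeadm release supports `config validate`:)

	# Illustrative check of the rendered config (paths taken from the log).
	sudo cat /var/tmp/minikube/kubeadm.yaml.new
	sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new || true
	# Lint the config with kubeadm, if the cached binary is available:
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new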
	I1119 03:01:41.262492 1658016 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 03:01:41.271122 1658016 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 03:01:41.271212 1658016 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 03:01:41.285247 1658016 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1119 03:01:41.315966 1658016 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 03:01:41.335409 1658016 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1119 03:01:41.353268 1658016 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1119 03:01:41.359773 1658016 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 03:01:41.372085 1658016 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 03:01:41.573466 1658016 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 03:01:41.620128 1658016 certs.go:69] Setting up /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/embed-certs-592123 for IP: 192.168.76.2
	I1119 03:01:41.620157 1658016 certs.go:195] generating shared ca certs ...
	I1119 03:01:41.620173 1658016 certs.go:227] acquiring lock for ca certs: {Name:mk25124d30540ffea11da5f0aa38fb6f55186602 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 03:01:41.620344 1658016 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.key
	I1119 03:01:41.620398 1658016 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/proxy-client-ca.key
	I1119 03:01:41.620409 1658016 certs.go:257] generating profile certs ...
	I1119 03:01:41.620523 1658016 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/embed-certs-592123/client.key
	I1119 03:01:41.620596 1658016 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/embed-certs-592123/apiserver.key.9c644e00
	I1119 03:01:41.620640 1658016 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/embed-certs-592123/proxy-client.key
	I1119 03:01:41.620774 1658016 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/1465377.pem (1338 bytes)
	W1119 03:01:41.620810 1658016 certs.go:480] ignoring /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/1465377_empty.pem, impossibly tiny 0 bytes
	I1119 03:01:41.620830 1658016 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca-key.pem (1679 bytes)
	I1119 03:01:41.620861 1658016 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem (1078 bytes)
	I1119 03:01:41.620890 1658016 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/cert.pem (1123 bytes)
	I1119 03:01:41.620922 1658016 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/key.pem (1675 bytes)
	I1119 03:01:41.620969 1658016 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/files/etc/ssl/certs/14653772.pem (1708 bytes)
	I1119 03:01:41.621663 1658016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 03:01:41.666747 1658016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1119 03:01:41.706670 1658016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 03:01:41.735378 1658016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1119 03:01:41.789349 1658016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/embed-certs-592123/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1119 03:01:41.826329 1658016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/embed-certs-592123/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1119 03:01:41.869206 1658016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/embed-certs-592123/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 03:01:41.913013 1658016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/embed-certs-592123/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1119 03:01:41.969269 1658016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/files/etc/ssl/certs/14653772.pem --> /usr/share/ca-certificates/14653772.pem (1708 bytes)
	I1119 03:01:42.004775 1658016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 03:01:42.031638 1658016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/1465377.pem --> /usr/share/ca-certificates/1465377.pem (1338 bytes)
	I1119 03:01:42.055302 1658016 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 03:01:42.074752 1658016 ssh_runner.go:195] Run: openssl version
	I1119 03:01:42.083886 1658016 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 03:01:42.100589 1658016 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 03:01:42.109790 1658016 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 01:58 /usr/share/ca-certificates/minikubeCA.pem
	I1119 03:01:42.109932 1658016 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 03:01:42.164052 1658016 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 03:01:42.174595 1658016 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1465377.pem && ln -fs /usr/share/ca-certificates/1465377.pem /etc/ssl/certs/1465377.pem"
	I1119 03:01:42.186485 1658016 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1465377.pem
	I1119 03:01:42.192616 1658016 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 02:04 /usr/share/ca-certificates/1465377.pem
	I1119 03:01:42.192830 1658016 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1465377.pem
	I1119 03:01:42.243513 1658016 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1465377.pem /etc/ssl/certs/51391683.0"
	I1119 03:01:42.254498 1658016 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14653772.pem && ln -fs /usr/share/ca-certificates/14653772.pem /etc/ssl/certs/14653772.pem"
	I1119 03:01:42.267812 1658016 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14653772.pem
	I1119 03:01:42.273902 1658016 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 02:04 /usr/share/ca-certificates/14653772.pem
	I1119 03:01:42.274057 1658016 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14653772.pem
	I1119 03:01:42.329694 1658016 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14653772.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 03:01:42.339413 1658016 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 03:01:42.345031 1658016 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1119 03:01:42.390953 1658016 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1119 03:01:42.496916 1658016 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1119 03:01:42.617590 1658016 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1119 03:01:42.702909 1658016 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1119 03:01:42.819439 1658016 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1119 03:01:42.930523 1658016 kubeadm.go:401] StartCluster: {Name:embed-certs-592123 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-592123 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 03:01:42.930662 1658016 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 03:01:42.930769 1658016 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 03:01:43.018406 1658016 cri.go:89] found id: "28baf9cda670ab54ffce2ff7181d4841299d3d55c51eab8df2a52c1c366a4111"
	I1119 03:01:43.018465 1658016 cri.go:89] found id: "44051fa115dbdefd2547da0097f35a9d487cbcc9b4becc2a70f91a77a0d1da21"
	I1119 03:01:43.018493 1658016 cri.go:89] found id: "0c30389a4661b622b8e4e66ed3373832cf9f4abe199dc1ec782692aa5b76a699"
	I1119 03:01:43.018512 1658016 cri.go:89] found id: "50a2bdb9c67513a1526c7008d09101b3db95d7bac468c5e2f2f7dcda041de7b5"
	I1119 03:01:43.018538 1658016 cri.go:89] found id: ""
	I1119 03:01:43.018613 1658016 ssh_runner.go:195] Run: sudo runc list -f json
	W1119 03:01:43.050006 1658016 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T03:01:43Z" level=error msg="open /run/runc: no such file or directory"
	I1119 03:01:43.050138 1658016 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 03:01:43.068372 1658016 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1119 03:01:43.068442 1658016 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1119 03:01:43.068517 1658016 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1119 03:01:43.086612 1658016 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1119 03:01:43.087281 1658016 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-592123" does not appear in /home/jenkins/minikube-integration/21924-1463525/kubeconfig
	I1119 03:01:43.087609 1658016 kubeconfig.go:62] /home/jenkins/minikube-integration/21924-1463525/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-592123" cluster setting kubeconfig missing "embed-certs-592123" context setting]
	I1119 03:01:43.088162 1658016 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/kubeconfig: {Name:mk569dbb5fa76b01404eb61ebb04e9884665ea78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 03:01:43.089916 1658016 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1119 03:01:43.106949 1658016 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1119 03:01:43.107033 1658016 kubeadm.go:602] duration metric: took 38.571245ms to restartPrimaryControlPlane
	I1119 03:01:43.107058 1658016 kubeadm.go:403] duration metric: took 176.542666ms to StartCluster
	I1119 03:01:43.107087 1658016 settings.go:142] acquiring lock: {Name:mk60840cd190596747bdc8f4ec6ab9c30048bf11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 03:01:43.107190 1658016 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21924-1463525/kubeconfig
	I1119 03:01:43.108563 1658016 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/kubeconfig: {Name:mk569dbb5fa76b01404eb61ebb04e9884665ea78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 03:01:43.108856 1658016 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 03:01:43.109391 1658016 config.go:182] Loaded profile config "embed-certs-592123": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 03:01:43.109384 1658016 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 03:01:43.109466 1658016 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-592123"
	I1119 03:01:43.109473 1658016 addons.go:70] Setting dashboard=true in profile "embed-certs-592123"
	I1119 03:01:43.109480 1658016 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-592123"
	W1119 03:01:43.109487 1658016 addons.go:248] addon storage-provisioner should already be in state true
	I1119 03:01:43.109488 1658016 addons.go:239] Setting addon dashboard=true in "embed-certs-592123"
	W1119 03:01:43.109494 1658016 addons.go:248] addon dashboard should already be in state true
	I1119 03:01:43.109570 1658016 host.go:66] Checking if "embed-certs-592123" exists ...
	I1119 03:01:43.109627 1658016 host.go:66] Checking if "embed-certs-592123" exists ...
	I1119 03:01:43.110054 1658016 cli_runner.go:164] Run: docker container inspect embed-certs-592123 --format={{.State.Status}}
	I1119 03:01:43.110073 1658016 cli_runner.go:164] Run: docker container inspect embed-certs-592123 --format={{.State.Status}}
	I1119 03:01:43.112556 1658016 addons.go:70] Setting default-storageclass=true in profile "embed-certs-592123"
	I1119 03:01:43.112588 1658016 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-592123"
	I1119 03:01:43.113501 1658016 cli_runner.go:164] Run: docker container inspect embed-certs-592123 --format={{.State.Status}}
	I1119 03:01:43.151994 1658016 out.go:179] * Verifying Kubernetes components...
	I1119 03:01:43.157749 1658016 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 03:01:43.157971 1658016 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1119 03:01:43.161116 1658016 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1119 03:01:43.173620 1658016 addons.go:239] Setting addon default-storageclass=true in "embed-certs-592123"
	W1119 03:01:43.173648 1658016 addons.go:248] addon default-storageclass should already be in state true
	I1119 03:01:43.173673 1658016 host.go:66] Checking if "embed-certs-592123" exists ...
	I1119 03:01:43.174122 1658016 cli_runner.go:164] Run: docker container inspect embed-certs-592123 --format={{.State.Status}}
	I1119 03:01:43.175058 1658016 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1119 03:01:43.175081 1658016 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1119 03:01:43.175143 1658016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-592123
	I1119 03:01:43.175260 1658016 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 03:01:41.670724 1656802 addons.go:515] duration metric: took 8.302736772s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1119 03:01:41.682414 1656802 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1119 03:01:41.684228 1656802 api_server.go:141] control plane version: v1.34.1
	I1119 03:01:41.684252 1656802 api_server.go:131] duration metric: took 22.327558ms to wait for apiserver health ...
	I1119 03:01:41.684261 1656802 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 03:01:41.722659 1656802 system_pods.go:59] 8 kube-system pods found
	I1119 03:01:41.722703 1656802 system_pods.go:61] "coredns-66bc5c9577-pkngt" [d74743aa-7170-415b-9f00-b196bc8b9837] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 03:01:41.722713 1656802 system_pods.go:61] "etcd-default-k8s-diff-port-579203" [e826f0a7-b445-41e7-a7b6-ef191991365e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1119 03:01:41.722719 1656802 system_pods.go:61] "kindnet-bt849" [5690abd0-63a3-4580-a0bf-a259dc29f6d0] Running
	I1119 03:01:41.722726 1656802 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-579203" [e50a666b-744d-415d-ac95-e502bf62a072] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1119 03:01:41.722732 1656802 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-579203" [28be9327-f878-4393-b4d3-dfe89f015c31] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1119 03:01:41.722738 1656802 system_pods.go:61] "kube-proxy-7ncfq" [2cd4821b-c2c9-4f47-b5de-93e55c8f8c38] Running
	I1119 03:01:41.722745 1656802 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-579203" [5b81d9f1-896a-4c4f-8c41-61b7b48d40ad] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1119 03:01:41.722758 1656802 system_pods.go:61] "storage-provisioner" [9639e9e0-73e8-48ed-a25a-603c687470cd] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 03:01:41.722766 1656802 system_pods.go:74] duration metric: took 38.49864ms to wait for pod list to return data ...
	I1119 03:01:41.722775 1656802 default_sa.go:34] waiting for default service account to be created ...
	I1119 03:01:41.739110 1656802 default_sa.go:45] found service account: "default"
	I1119 03:01:41.739131 1656802 default_sa.go:55] duration metric: took 16.349743ms for default service account to be created ...
	I1119 03:01:41.739194 1656802 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 03:01:41.743937 1656802 system_pods.go:86] 8 kube-system pods found
	I1119 03:01:41.744018 1656802 system_pods.go:89] "coredns-66bc5c9577-pkngt" [d74743aa-7170-415b-9f00-b196bc8b9837] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 03:01:41.744045 1656802 system_pods.go:89] "etcd-default-k8s-diff-port-579203" [e826f0a7-b445-41e7-a7b6-ef191991365e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1119 03:01:41.744066 1656802 system_pods.go:89] "kindnet-bt849" [5690abd0-63a3-4580-a0bf-a259dc29f6d0] Running
	I1119 03:01:41.744110 1656802 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-579203" [e50a666b-744d-415d-ac95-e502bf62a072] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1119 03:01:41.744131 1656802 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-579203" [28be9327-f878-4393-b4d3-dfe89f015c31] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1119 03:01:41.744167 1656802 system_pods.go:89] "kube-proxy-7ncfq" [2cd4821b-c2c9-4f47-b5de-93e55c8f8c38] Running
	I1119 03:01:41.744193 1656802 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-579203" [5b81d9f1-896a-4c4f-8c41-61b7b48d40ad] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1119 03:01:41.744212 1656802 system_pods.go:89] "storage-provisioner" [9639e9e0-73e8-48ed-a25a-603c687470cd] Running
	I1119 03:01:41.744249 1656802 system_pods.go:126] duration metric: took 5.048931ms to wait for k8s-apps to be running ...
	I1119 03:01:41.744275 1656802 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 03:01:41.744359 1656802 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 03:01:41.768797 1656802 system_svc.go:56] duration metric: took 24.51344ms WaitForService to wait for kubelet
	I1119 03:01:41.768872 1656802 kubeadm.go:587] duration metric: took 8.401245822s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 03:01:41.768909 1656802 node_conditions.go:102] verifying NodePressure condition ...
	I1119 03:01:41.774044 1656802 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1119 03:01:41.774127 1656802 node_conditions.go:123] node cpu capacity is 2
	I1119 03:01:41.774154 1656802 node_conditions.go:105] duration metric: took 5.225729ms to run NodePressure ...
	I1119 03:01:41.774178 1656802 start.go:242] waiting for startup goroutines ...
	I1119 03:01:41.774213 1656802 start.go:247] waiting for cluster config update ...
	I1119 03:01:41.774243 1656802 start.go:256] writing updated cluster config ...
	I1119 03:01:41.774598 1656802 ssh_runner.go:195] Run: rm -f paused
	I1119 03:01:41.780165 1656802 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 03:01:41.783949 1656802 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-pkngt" in "kube-system" namespace to be "Ready" or be gone ...
	W1119 03:01:43.790163 1656802 pod_ready.go:104] pod "coredns-66bc5c9577-pkngt" is not "Ready", error: <nil>
	I1119 03:01:43.179717 1658016 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 03:01:43.179746 1658016 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 03:01:43.179813 1658016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-592123
	I1119 03:01:43.219196 1658016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34920 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/embed-certs-592123/id_rsa Username:docker}
	I1119 03:01:43.233795 1658016 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 03:01:43.233815 1658016 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 03:01:43.233888 1658016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-592123
	I1119 03:01:43.235409 1658016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34920 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/embed-certs-592123/id_rsa Username:docker}
	I1119 03:01:43.263024 1658016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34920 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/embed-certs-592123/id_rsa Username:docker}
	I1119 03:01:43.443517 1658016 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 03:01:43.497238 1658016 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1119 03:01:43.497304 1658016 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1119 03:01:43.537813 1658016 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 03:01:43.566248 1658016 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1119 03:01:43.566313 1658016 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1119 03:01:43.580461 1658016 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 03:01:43.616642 1658016 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1119 03:01:43.616707 1658016 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1119 03:01:43.696205 1658016 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1119 03:01:43.696269 1658016 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1119 03:01:43.758248 1658016 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1119 03:01:43.758321 1658016 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1119 03:01:43.803898 1658016 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1119 03:01:43.803973 1658016 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1119 03:01:43.857915 1658016 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1119 03:01:43.857988 1658016 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1119 03:01:43.887401 1658016 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1119 03:01:43.887473 1658016 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1119 03:01:43.919156 1658016 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1119 03:01:43.919230 1658016 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1119 03:01:43.955362 1658016 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1119 03:01:45.792410 1656802 pod_ready.go:104] pod "coredns-66bc5c9577-pkngt" is not "Ready", error: <nil>
	W1119 03:01:47.793974 1656802 pod_ready.go:104] pod "coredns-66bc5c9577-pkngt" is not "Ready", error: <nil>
	I1119 03:01:53.061494 1658016 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.617935153s)
	I1119 03:01:53.061575 1658016 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (9.523699557s)
	I1119 03:01:53.061609 1658016 node_ready.go:35] waiting up to 6m0s for node "embed-certs-592123" to be "Ready" ...
	I1119 03:01:53.061903 1658016 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.481378979s)
	I1119 03:01:53.062149 1658016 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (9.106708624s)
	I1119 03:01:53.065575 1658016 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-592123 addons enable metrics-server
	
	I1119 03:01:53.109051 1658016 node_ready.go:49] node "embed-certs-592123" is "Ready"
	I1119 03:01:53.109130 1658016 node_ready.go:38] duration metric: took 47.507974ms for node "embed-certs-592123" to be "Ready" ...
	I1119 03:01:53.109158 1658016 api_server.go:52] waiting for apiserver process to appear ...
	I1119 03:01:53.109245 1658016 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 03:01:53.143045 1658016 api_server.go:72] duration metric: took 10.034041016s to wait for apiserver process to appear ...
	I1119 03:01:53.143073 1658016 api_server.go:88] waiting for apiserver healthz status ...
	I1119 03:01:53.143092 1658016 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 03:01:53.150888 1658016 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1119 03:01:50.294379 1656802 pod_ready.go:104] pod "coredns-66bc5c9577-pkngt" is not "Ready", error: <nil>
	W1119 03:01:52.803744 1656802 pod_ready.go:104] pod "coredns-66bc5c9577-pkngt" is not "Ready", error: <nil>
	I1119 03:01:53.153808 1658016 addons.go:515] duration metric: took 10.044421191s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1119 03:01:53.173080 1658016 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1119 03:01:53.173106 1658016 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1119 03:01:53.643397 1658016 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 03:01:53.654965 1658016 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1119 03:01:53.656168 1658016 api_server.go:141] control plane version: v1.34.1
	I1119 03:01:53.656188 1658016 api_server.go:131] duration metric: took 513.10786ms to wait for apiserver health ...
	I1119 03:01:53.656197 1658016 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 03:01:53.660021 1658016 system_pods.go:59] 8 kube-system pods found
	I1119 03:01:53.660059 1658016 system_pods.go:61] "coredns-66bc5c9577-vtc44" [5e3bd982-5dec-4b41-97a5-feea8996184f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 03:01:53.660078 1658016 system_pods.go:61] "etcd-embed-certs-592123" [7a5b129c-3716-4d23-8c43-28d58936c458] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1119 03:01:53.660085 1658016 system_pods.go:61] "kindnet-sv99p" [30531f66-1993-4675-a8a7-c88fbd84c7e0] Running
	I1119 03:01:53.660090 1658016 system_pods.go:61] "kube-apiserver-embed-certs-592123" [a890bda5-d7b3-4776-9e06-d9323deea3d5] Running
	I1119 03:01:53.660105 1658016 system_pods.go:61] "kube-controller-manager-embed-certs-592123" [b5eadc5e-a4d2-45fb-ac21-8c466ec953fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1119 03:01:53.660117 1658016 system_pods.go:61] "kube-proxy-55pcf" [5d001372-9066-4ffc-a2f5-1f51e988cb2a] Running
	I1119 03:01:53.660123 1658016 system_pods.go:61] "kube-scheduler-embed-certs-592123" [d216d9cd-538e-4206-b0cf-37d7c5e8d4a3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1119 03:01:53.660128 1658016 system_pods.go:61] "storage-provisioner" [34c0ebbf-6c58-4d0b-94de-dbfcf04b254d] Running
	I1119 03:01:53.660139 1658016 system_pods.go:74] duration metric: took 3.935961ms to wait for pod list to return data ...
	I1119 03:01:53.660147 1658016 default_sa.go:34] waiting for default service account to be created ...
	I1119 03:01:53.663078 1658016 default_sa.go:45] found service account: "default"
	I1119 03:01:53.663104 1658016 default_sa.go:55] duration metric: took 2.951424ms for default service account to be created ...
	I1119 03:01:53.663113 1658016 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 03:01:53.666546 1658016 system_pods.go:86] 8 kube-system pods found
	I1119 03:01:53.666578 1658016 system_pods.go:89] "coredns-66bc5c9577-vtc44" [5e3bd982-5dec-4b41-97a5-feea8996184f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 03:01:53.666587 1658016 system_pods.go:89] "etcd-embed-certs-592123" [7a5b129c-3716-4d23-8c43-28d58936c458] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1119 03:01:53.666592 1658016 system_pods.go:89] "kindnet-sv99p" [30531f66-1993-4675-a8a7-c88fbd84c7e0] Running
	I1119 03:01:53.666597 1658016 system_pods.go:89] "kube-apiserver-embed-certs-592123" [a890bda5-d7b3-4776-9e06-d9323deea3d5] Running
	I1119 03:01:53.666604 1658016 system_pods.go:89] "kube-controller-manager-embed-certs-592123" [b5eadc5e-a4d2-45fb-ac21-8c466ec953fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1119 03:01:53.666612 1658016 system_pods.go:89] "kube-proxy-55pcf" [5d001372-9066-4ffc-a2f5-1f51e988cb2a] Running
	I1119 03:01:53.666619 1658016 system_pods.go:89] "kube-scheduler-embed-certs-592123" [d216d9cd-538e-4206-b0cf-37d7c5e8d4a3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1119 03:01:53.666630 1658016 system_pods.go:89] "storage-provisioner" [34c0ebbf-6c58-4d0b-94de-dbfcf04b254d] Running
	I1119 03:01:53.666638 1658016 system_pods.go:126] duration metric: took 3.519218ms to wait for k8s-apps to be running ...
	I1119 03:01:53.666652 1658016 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 03:01:53.666716 1658016 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 03:01:53.697982 1658016 system_svc.go:56] duration metric: took 31.319686ms WaitForService to wait for kubelet
	I1119 03:01:53.698012 1658016 kubeadm.go:587] duration metric: took 10.589014492s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 03:01:53.698030 1658016 node_conditions.go:102] verifying NodePressure condition ...
	I1119 03:01:53.703692 1658016 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1119 03:01:53.703727 1658016 node_conditions.go:123] node cpu capacity is 2
	I1119 03:01:53.703743 1658016 node_conditions.go:105] duration metric: took 5.704961ms to run NodePressure ...
	I1119 03:01:53.703757 1658016 start.go:242] waiting for startup goroutines ...
	I1119 03:01:53.703764 1658016 start.go:247] waiting for cluster config update ...
	I1119 03:01:53.703778 1658016 start.go:256] writing updated cluster config ...
	I1119 03:01:53.704062 1658016 ssh_runner.go:195] Run: rm -f paused
	I1119 03:01:53.708700 1658016 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 03:01:53.717445 1658016 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-vtc44" in "kube-system" namespace to be "Ready" or be gone ...
	W1119 03:01:55.726256 1658016 pod_ready.go:104] pod "coredns-66bc5c9577-vtc44" is not "Ready", error: <nil>
	W1119 03:01:55.288922 1656802 pod_ready.go:104] pod "coredns-66bc5c9577-pkngt" is not "Ready", error: <nil>
	W1119 03:01:57.292291 1656802 pod_ready.go:104] pod "coredns-66bc5c9577-pkngt" is not "Ready", error: <nil>
	W1119 03:01:59.292945 1656802 pod_ready.go:104] pod "coredns-66bc5c9577-pkngt" is not "Ready", error: <nil>
	W1119 03:01:58.223419 1658016 pod_ready.go:104] pod "coredns-66bc5c9577-vtc44" is not "Ready", error: <nil>
	W1119 03:02:00.278475 1658016 pod_ready.go:104] pod "coredns-66bc5c9577-vtc44" is not "Ready", error: <nil>
	W1119 03:02:01.790375 1656802 pod_ready.go:104] pod "coredns-66bc5c9577-pkngt" is not "Ready", error: <nil>
	W1119 03:02:04.289904 1656802 pod_ready.go:104] pod "coredns-66bc5c9577-pkngt" is not "Ready", error: <nil>
	W1119 03:02:02.723334 1658016 pod_ready.go:104] pod "coredns-66bc5c9577-vtc44" is not "Ready", error: <nil>
	W1119 03:02:05.224627 1658016 pod_ready.go:104] pod "coredns-66bc5c9577-vtc44" is not "Ready", error: <nil>
	W1119 03:02:06.793192 1656802 pod_ready.go:104] pod "coredns-66bc5c9577-pkngt" is not "Ready", error: <nil>
	W1119 03:02:09.289049 1656802 pod_ready.go:104] pod "coredns-66bc5c9577-pkngt" is not "Ready", error: <nil>
	W1119 03:02:07.233415 1658016 pod_ready.go:104] pod "coredns-66bc5c9577-vtc44" is not "Ready", error: <nil>
	W1119 03:02:09.723045 1658016 pod_ready.go:104] pod "coredns-66bc5c9577-vtc44" is not "Ready", error: <nil>
	W1119 03:02:11.290055 1656802 pod_ready.go:104] pod "coredns-66bc5c9577-pkngt" is not "Ready", error: <nil>
	I1119 03:02:12.790427 1656802 pod_ready.go:94] pod "coredns-66bc5c9577-pkngt" is "Ready"
	I1119 03:02:12.790507 1656802 pod_ready.go:86] duration metric: took 31.006488894s for pod "coredns-66bc5c9577-pkngt" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:02:12.793312 1656802 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-579203" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:02:12.797770 1656802 pod_ready.go:94] pod "etcd-default-k8s-diff-port-579203" is "Ready"
	I1119 03:02:12.797796 1656802 pod_ready.go:86] duration metric: took 4.458802ms for pod "etcd-default-k8s-diff-port-579203" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:02:12.800142 1656802 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-579203" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:02:12.804674 1656802 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-579203" is "Ready"
	I1119 03:02:12.804715 1656802 pod_ready.go:86] duration metric: took 4.550434ms for pod "kube-apiserver-default-k8s-diff-port-579203" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:02:12.807016 1656802 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-579203" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:02:12.988477 1656802 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-579203" is "Ready"
	I1119 03:02:12.988513 1656802 pod_ready.go:86] duration metric: took 181.4741ms for pod "kube-controller-manager-default-k8s-diff-port-579203" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:02:13.188436 1656802 pod_ready.go:83] waiting for pod "kube-proxy-7ncfq" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:02:13.588469 1656802 pod_ready.go:94] pod "kube-proxy-7ncfq" is "Ready"
	I1119 03:02:13.588497 1656802 pod_ready.go:86] duration metric: took 400.032955ms for pod "kube-proxy-7ncfq" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:02:13.788515 1656802 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-579203" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:02:14.188702 1656802 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-579203" is "Ready"
	I1119 03:02:14.188782 1656802 pod_ready.go:86] duration metric: took 400.239275ms for pod "kube-scheduler-default-k8s-diff-port-579203" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:02:14.188820 1656802 pod_ready.go:40] duration metric: took 32.40858096s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 03:02:14.253571 1656802 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1119 03:02:14.256753 1656802 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-579203" cluster and "default" namespace by default
	W1119 03:02:12.224015 1658016 pod_ready.go:104] pod "coredns-66bc5c9577-vtc44" is not "Ready", error: <nil>
	W1119 03:02:14.723585 1658016 pod_ready.go:104] pod "coredns-66bc5c9577-vtc44" is not "Ready", error: <nil>
	W1119 03:02:17.223113 1658016 pod_ready.go:104] pod "coredns-66bc5c9577-vtc44" is not "Ready", error: <nil>
	W1119 03:02:19.223468 1658016 pod_ready.go:104] pod "coredns-66bc5c9577-vtc44" is not "Ready", error: <nil>
	W1119 03:02:21.722580 1658016 pod_ready.go:104] pod "coredns-66bc5c9577-vtc44" is not "Ready", error: <nil>
	W1119 03:02:24.222733 1658016 pod_ready.go:104] pod "coredns-66bc5c9577-vtc44" is not "Ready", error: <nil>
	I1119 03:02:25.223317 1658016 pod_ready.go:94] pod "coredns-66bc5c9577-vtc44" is "Ready"
	I1119 03:02:25.223345 1658016 pod_ready.go:86] duration metric: took 31.505871824s for pod "coredns-66bc5c9577-vtc44" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:02:25.226268 1658016 pod_ready.go:83] waiting for pod "etcd-embed-certs-592123" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:02:25.230852 1658016 pod_ready.go:94] pod "etcd-embed-certs-592123" is "Ready"
	I1119 03:02:25.230882 1658016 pod_ready.go:86] duration metric: took 4.588546ms for pod "etcd-embed-certs-592123" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:02:25.232932 1658016 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-592123" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:02:25.237209 1658016 pod_ready.go:94] pod "kube-apiserver-embed-certs-592123" is "Ready"
	I1119 03:02:25.237237 1658016 pod_ready.go:86] duration metric: took 4.279468ms for pod "kube-apiserver-embed-certs-592123" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:02:25.239472 1658016 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-592123" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:02:25.420523 1658016 pod_ready.go:94] pod "kube-controller-manager-embed-certs-592123" is "Ready"
	I1119 03:02:25.420555 1658016 pod_ready.go:86] duration metric: took 181.058406ms for pod "kube-controller-manager-embed-certs-592123" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:02:25.620523 1658016 pod_ready.go:83] waiting for pod "kube-proxy-55pcf" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:02:26.020653 1658016 pod_ready.go:94] pod "kube-proxy-55pcf" is "Ready"
	I1119 03:02:26.020686 1658016 pod_ready.go:86] duration metric: took 400.085735ms for pod "kube-proxy-55pcf" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:02:26.220857 1658016 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-592123" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:02:26.620682 1658016 pod_ready.go:94] pod "kube-scheduler-embed-certs-592123" is "Ready"
	I1119 03:02:26.620708 1658016 pod_ready.go:86] duration metric: took 399.828135ms for pod "kube-scheduler-embed-certs-592123" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:02:26.620721 1658016 pod_ready.go:40] duration metric: took 32.911988432s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 03:02:26.700063 1658016 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1119 03:02:26.702951 1658016 out.go:179] * Done! kubectl is now configured to use "embed-certs-592123" cluster and "default" namespace by default
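	
	For reference, a minimal sketch (not taken from the minikube source) of the kind of /healthz polling recorded earlier in this log (api_server.go waiting for a 200 after the 500 "rbac/bootstrap-roles" failures). The endpoint URL is copied from the log above; the InsecureSkipVerify setting and the timeout values are assumptions for illustration only, the real checker uses the cluster CA bundle and its own wait logic.
	
	// healthz_poll_sketch.go - poll an apiserver /healthz endpoint until it returns 200.
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)
	
	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Assumption for the sketch: skip server cert verification.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		url := "https://192.168.76.2:8443/healthz" // endpoint taken from the log above
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("healthz ok")
					return
				}
				// e.g. 500 while the rbac/bootstrap-roles poststarthook is still pending
				fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for healthz")
	}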
	
	
	==> CRI-O <==
	Nov 19 03:02:11 default-k8s-diff-port-579203 crio[653]: time="2025-11-19T03:02:11.759383975Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=2debabb9-a8a0-4b47-8a76-cc52393d25d9 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 03:02:11 default-k8s-diff-port-579203 crio[653]: time="2025-11-19T03:02:11.761498376Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=db1f30e8-c83d-4621-aa93-c6914ac0d1db name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 03:02:11 default-k8s-diff-port-579203 crio[653]: time="2025-11-19T03:02:11.761647984Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 03:02:11 default-k8s-diff-port-579203 crio[653]: time="2025-11-19T03:02:11.766319006Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 03:02:11 default-k8s-diff-port-579203 crio[653]: time="2025-11-19T03:02:11.766489618Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/aa7c0ab4d079cebd26944c1c9e516c10e8dcc2744ad452b0cd56814f74ae1daa/merged/etc/passwd: no such file or directory"
	Nov 19 03:02:11 default-k8s-diff-port-579203 crio[653]: time="2025-11-19T03:02:11.766511656Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/aa7c0ab4d079cebd26944c1c9e516c10e8dcc2744ad452b0cd56814f74ae1daa/merged/etc/group: no such file or directory"
	Nov 19 03:02:11 default-k8s-diff-port-579203 crio[653]: time="2025-11-19T03:02:11.766775698Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 03:02:11 default-k8s-diff-port-579203 crio[653]: time="2025-11-19T03:02:11.792562874Z" level=info msg="Created container dc2899265d6b0abb8bb121734cf5340b1c1e2eaaeacba61cd625f0fd5849b46a: kube-system/storage-provisioner/storage-provisioner" id=db1f30e8-c83d-4621-aa93-c6914ac0d1db name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 03:02:11 default-k8s-diff-port-579203 crio[653]: time="2025-11-19T03:02:11.793788562Z" level=info msg="Starting container: dc2899265d6b0abb8bb121734cf5340b1c1e2eaaeacba61cd625f0fd5849b46a" id=9d6a693d-9217-41ee-b94f-345cb4b36715 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 03:02:11 default-k8s-diff-port-579203 crio[653]: time="2025-11-19T03:02:11.795897736Z" level=info msg="Started container" PID=1638 containerID=dc2899265d6b0abb8bb121734cf5340b1c1e2eaaeacba61cd625f0fd5849b46a description=kube-system/storage-provisioner/storage-provisioner id=9d6a693d-9217-41ee-b94f-345cb4b36715 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e8c3902df90b2da5835f0282101762661e41a1a9efecfed6306176699b6b59b8
	Nov 19 03:02:20 default-k8s-diff-port-579203 crio[653]: time="2025-11-19T03:02:20.975762625Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 03:02:20 default-k8s-diff-port-579203 crio[653]: time="2025-11-19T03:02:20.983586188Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 03:02:20 default-k8s-diff-port-579203 crio[653]: time="2025-11-19T03:02:20.983622568Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 03:02:20 default-k8s-diff-port-579203 crio[653]: time="2025-11-19T03:02:20.983643835Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 03:02:20 default-k8s-diff-port-579203 crio[653]: time="2025-11-19T03:02:20.986734414Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 03:02:20 default-k8s-diff-port-579203 crio[653]: time="2025-11-19T03:02:20.986765059Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 03:02:20 default-k8s-diff-port-579203 crio[653]: time="2025-11-19T03:02:20.986787425Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 03:02:20 default-k8s-diff-port-579203 crio[653]: time="2025-11-19T03:02:20.989780086Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 03:02:20 default-k8s-diff-port-579203 crio[653]: time="2025-11-19T03:02:20.989811281Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 03:02:20 default-k8s-diff-port-579203 crio[653]: time="2025-11-19T03:02:20.989833853Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 03:02:20 default-k8s-diff-port-579203 crio[653]: time="2025-11-19T03:02:20.992733979Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 03:02:20 default-k8s-diff-port-579203 crio[653]: time="2025-11-19T03:02:20.992766101Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 03:02:20 default-k8s-diff-port-579203 crio[653]: time="2025-11-19T03:02:20.992790995Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 03:02:20 default-k8s-diff-port-579203 crio[653]: time="2025-11-19T03:02:20.996511963Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 03:02:20 default-k8s-diff-port-579203 crio[653]: time="2025-11-19T03:02:20.996544898Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	dc2899265d6b0       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           17 seconds ago      Running             storage-provisioner         2                   e8c3902df90b2       storage-provisioner                                    kube-system
	0b0a1ea8af8be       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           20 seconds ago      Exited              dashboard-metrics-scraper   2                   d9d09765bae39       dashboard-metrics-scraper-6ffb444bf9-57qxx             kubernetes-dashboard
	60ef01ce19f59       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   28 seconds ago      Running             kubernetes-dashboard        0                   6072879a2f315       kubernetes-dashboard-855c9754f9-7sz62                  kubernetes-dashboard
	cc12ce0d09f2f       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           48 seconds ago      Running             kindnet-cni                 1                   9aa550c464849       kindnet-bt849                                          kube-system
	36dc12556790e       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           48 seconds ago      Running             coredns                     1                   e713f2d887381       coredns-66bc5c9577-pkngt                               kube-system
	39e0b2fc4572e       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           48 seconds ago      Running             busybox                     1                   6815d86beab2a       busybox                                                default
	3ea7de269e8e6       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           48 seconds ago      Running             kube-proxy                  1                   c8bb45b09c734       kube-proxy-7ncfq                                       kube-system
	717bbd5246f66       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           48 seconds ago      Exited              storage-provisioner         1                   e8c3902df90b2       storage-provisioner                                    kube-system
	4516831cebdb2       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           56 seconds ago      Running             etcd                        1                   d59b243572465       etcd-default-k8s-diff-port-579203                      kube-system
	34a04e8a92683       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           56 seconds ago      Running             kube-apiserver              1                   111d547c1a8a6       kube-apiserver-default-k8s-diff-port-579203            kube-system
	3803cdc1a2993       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           56 seconds ago      Running             kube-controller-manager     1                   2934feb29a5c7       kube-controller-manager-default-k8s-diff-port-579203   kube-system
	1f1f933b71826       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           56 seconds ago      Running             kube-scheduler              1                   acefce472ced4       kube-scheduler-default-k8s-diff-port-579203            kube-system
	
	
	==> coredns [36dc12556790ec62ebafc51adfeddf981db6efc365694b45844fc58332452d44] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46725 - 14385 "HINFO IN 5227044846904803637.3225010015740555538. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.056835043s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-579203
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-579203
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ebd49efe8b7d44f89fc96e21beffcd64dfee8277
	                    minikube.k8s.io/name=default-k8s-diff-port-579203
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T03_00_11_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 03:00:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-579203
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 03:02:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 03:02:10 +0000   Wed, 19 Nov 2025 03:00:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 03:02:10 +0000   Wed, 19 Nov 2025 03:00:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 03:02:10 +0000   Wed, 19 Nov 2025 03:00:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 03:02:10 +0000   Wed, 19 Nov 2025 03:00:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-579203
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                7a64d282-4275-4f3a-a03c-1a14359e0c92
	  Boot ID:                    b92b1939-fcd0-45dc-ac89-2d161566a71c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 coredns-66bc5c9577-pkngt                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m14s
	  kube-system                 etcd-default-k8s-diff-port-579203                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m18s
	  kube-system                 kindnet-bt849                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m14s
	  kube-system                 kube-apiserver-default-k8s-diff-port-579203             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m18s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-579203    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m18s
	  kube-system                 kube-proxy-7ncfq                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m14s
	  kube-system                 kube-scheduler-default-k8s-diff-port-579203             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m18s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m12s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-57qxx              0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-7sz62                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m12s                  kube-proxy       
	  Normal   Starting                 46s                    kube-proxy       
	  Normal   NodeHasSufficientMemory  2m30s (x8 over 2m30s)  kubelet          Node default-k8s-diff-port-579203 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m30s (x8 over 2m30s)  kubelet          Node default-k8s-diff-port-579203 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m30s (x8 over 2m30s)  kubelet          Node default-k8s-diff-port-579203 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m19s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m19s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m18s                  kubelet          Node default-k8s-diff-port-579203 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m18s                  kubelet          Node default-k8s-diff-port-579203 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m18s                  kubelet          Node default-k8s-diff-port-579203 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           2m15s                  node-controller  Node default-k8s-diff-port-579203 event: Registered Node default-k8s-diff-port-579203 in Controller
	  Normal   NodeReady                93s                    kubelet          Node default-k8s-diff-port-579203 status is now: NodeReady
	  Normal   Starting                 58s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 58s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  57s (x8 over 57s)      kubelet          Node default-k8s-diff-port-579203 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    57s (x8 over 57s)      kubelet          Node default-k8s-diff-port-579203 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     57s (x8 over 57s)      kubelet          Node default-k8s-diff-port-579203 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           46s                    node-controller  Node default-k8s-diff-port-579203 event: Registered Node default-k8s-diff-port-579203 in Controller
	
	
	==> dmesg <==
	[Nov19 02:38] overlayfs: idmapped layers are currently not supported
	[Nov19 02:39] overlayfs: idmapped layers are currently not supported
	[Nov19 02:41] overlayfs: idmapped layers are currently not supported
	[ +25.528121] overlayfs: idmapped layers are currently not supported
	[ +11.329962] overlayfs: idmapped layers are currently not supported
	[Nov19 02:42] overlayfs: idmapped layers are currently not supported
	[ +16.386117] overlayfs: idmapped layers are currently not supported
	[Nov19 02:43] overlayfs: idmapped layers are currently not supported
	[ +23.762081] overlayfs: idmapped layers are currently not supported
	[Nov19 02:45] overlayfs: idmapped layers are currently not supported
	[Nov19 02:46] overlayfs: idmapped layers are currently not supported
	[Nov19 02:48] overlayfs: idmapped layers are currently not supported
	[Nov19 02:50] overlayfs: idmapped layers are currently not supported
	[ +30.622614] overlayfs: idmapped layers are currently not supported
	[Nov19 02:53] overlayfs: idmapped layers are currently not supported
	[Nov19 02:55] overlayfs: idmapped layers are currently not supported
	[ +48.629499] overlayfs: idmapped layers are currently not supported
	[Nov19 02:56] overlayfs: idmapped layers are currently not supported
	[ +31.470515] overlayfs: idmapped layers are currently not supported
	[Nov19 02:57] overlayfs: idmapped layers are currently not supported
	[Nov19 02:58] overlayfs: idmapped layers are currently not supported
	[Nov19 03:00] overlayfs: idmapped layers are currently not supported
	[  +8.385032] overlayfs: idmapped layers are currently not supported
	[Nov19 03:01] overlayfs: idmapped layers are currently not supported
	[  +9.842210] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [4516831cebdb2595b82a89b6272a4678df2d23122cf9ae52b8b5ae44bd439756] <==
	{"level":"warn","ts":"2025-11-19T03:01:35.476184Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:35.484382Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:35.534526Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:35.546960Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:35.579169Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:35.606455Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:35.639518Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:35.656908Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:35.668989Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:35.726169Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:35.785347Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:35.820170Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:35.870606Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:35.915578Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:35.949603Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:35.975691Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:36.024343Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32988","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:36.046104Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:36.142893Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:36.153691Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:36.249788Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:36.286192Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:36.323909Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:36.352050Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:36.529800Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33124","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 03:02:29 up 10:44,  0 user,  load average: 3.76, 3.42, 2.75
	Linux default-k8s-diff-port-579203 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [cc12ce0d09f2f1fda420bf8fe3582af2e4d897fbce86ad179d3548f3c7dd46f7] <==
	I1119 03:01:40.792340       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 03:01:40.796228       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1119 03:01:40.805891       1 main.go:148] setting mtu 1500 for CNI 
	I1119 03:01:40.805911       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 03:01:40.805923       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T03:01:40Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 03:01:40.972388       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 03:01:40.972406       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 03:01:40.972414       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 03:01:40.972685       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1119 03:02:10.973162       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1119 03:02:10.973274       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1119 03:02:10.973315       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1119 03:02:10.973362       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1119 03:02:12.472861       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 03:02:12.472932       1 metrics.go:72] Registering metrics
	I1119 03:02:12.473039       1 controller.go:711] "Syncing nftables rules"
	I1119 03:02:20.975397       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1119 03:02:20.975474       1 main.go:301] handling current node
	
	
	==> kube-apiserver [34a04e8a9268354f0b56354ac57651328f516cc508f9fa0c077c3b4d4336b5ac] <==
	I1119 03:01:39.200622       1 autoregister_controller.go:144] Starting autoregister controller
	I1119 03:01:39.200629       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1119 03:01:39.210295       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1119 03:01:39.210342       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1119 03:01:39.210869       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1119 03:01:39.210913       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1119 03:01:39.249053       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1119 03:01:39.255934       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1119 03:01:39.286529       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1119 03:01:39.273060       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 03:01:39.306017       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1119 03:01:39.373382       1 cache.go:39] Caches are synced for autoregister controller
	I1119 03:01:39.415441       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1119 03:01:39.416053       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1119 03:01:39.484965       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	E1119 03:01:39.544584       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1119 03:01:41.013235       1 controller.go:667] quota admission added evaluator for: namespaces
	I1119 03:01:41.245627       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1119 03:01:41.395640       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 03:01:41.444487       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 03:01:41.614555       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.72.193"}
	I1119 03:01:41.653126       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.132.57"}
	I1119 03:01:44.064236       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1119 03:01:44.402333       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1119 03:01:44.499003       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [3803cdc1a2993683debdee19a3b01fb09e7c32a9d12eb84a9436d969662cea8a] <==
	I1119 03:01:43.998233       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1119 03:01:43.998263       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1119 03:01:44.000493       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1119 03:01:44.001716       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1119 03:01:44.003416       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1119 03:01:44.003542       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 03:01:44.009619       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1119 03:01:44.009727       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1119 03:01:44.013756       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1119 03:01:44.014815       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1119 03:01:44.019901       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1119 03:01:44.021750       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1119 03:01:44.021907       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1119 03:01:44.021966       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1119 03:01:44.022036       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1119 03:01:44.028322       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1119 03:01:44.030671       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1119 03:01:44.049583       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1119 03:01:44.053562       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1119 03:01:44.057463       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1119 03:01:44.065914       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1119 03:01:44.066062       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1119 03:01:44.081612       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 03:01:44.410306       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kubernetes-dashboard/dashboard-metrics-scraper" err="EndpointSlice informer cache is out of date"
	I1119 03:01:44.412642       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kubernetes-dashboard/kubernetes-dashboard" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [3ea7de269e8e6d7b9b64192a351808f1a03a33517868461ef84dc108d46883a5] <==
	I1119 03:01:41.800409       1 server_linux.go:53] "Using iptables proxy"
	I1119 03:01:41.979556       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 03:01:42.093319       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 03:01:42.093372       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1119 03:01:42.093473       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 03:01:42.561909       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 03:01:42.562036       1 server_linux.go:132] "Using iptables Proxier"
	I1119 03:01:42.692526       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 03:01:42.692942       1 server.go:527] "Version info" version="v1.34.1"
	I1119 03:01:42.693210       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 03:01:42.706321       1 config.go:200] "Starting service config controller"
	I1119 03:01:42.706389       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 03:01:42.706441       1 config.go:106] "Starting endpoint slice config controller"
	I1119 03:01:42.706467       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 03:01:42.731566       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 03:01:42.732447       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 03:01:42.733244       1 config.go:309] "Starting node config controller"
	I1119 03:01:42.739690       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 03:01:42.739762       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 03:01:42.807183       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1119 03:01:42.833452       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1119 03:01:42.844245       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [1f1f933b7182604f83325a95fc3ff39e0799211227f9d528ab807a128acc0a96] <==
	I1119 03:01:40.724127       1 serving.go:386] Generated self-signed cert in-memory
	I1119 03:01:42.521071       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1119 03:01:42.521162       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 03:01:42.529412       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1119 03:01:42.531887       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 03:01:42.538129       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 03:01:42.531844       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1119 03:01:42.538158       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1119 03:01:42.531902       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1119 03:01:42.538936       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1119 03:01:42.531918       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1119 03:01:42.641305       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 03:01:42.646593       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1119 03:01:42.651200       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Nov 19 03:01:44 default-k8s-diff-port-579203 kubelet[783]: E1119 03:01:44.448819     783 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/aea40428-2b0c-4c57-8708-f8b56e473799-kube-api-access-hs999 podName:aea40428-2b0c-4c57-8708-f8b56e473799 nodeName:}" failed. No retries permitted until 2025-11-19 03:01:44.948793285 +0000 UTC m=+13.087466182 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-hs999" (UniqueName: "kubernetes.io/projected/aea40428-2b0c-4c57-8708-f8b56e473799-kube-api-access-hs999") pod "dashboard-metrics-scraper-6ffb444bf9-57qxx" (UID: "aea40428-2b0c-4c57-8708-f8b56e473799") : configmap "kube-root-ca.crt" not found
	Nov 19 03:01:44 default-k8s-diff-port-579203 kubelet[783]: E1119 03:01:44.452415     783 projected.go:291] Couldn't get configMap kubernetes-dashboard/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 19 03:01:44 default-k8s-diff-port-579203 kubelet[783]: E1119 03:01:44.452455     783 projected.go:196] Error preparing data for projected volume kube-api-access-22bsf for pod kubernetes-dashboard/kubernetes-dashboard-855c9754f9-7sz62: configmap "kube-root-ca.crt" not found
	Nov 19 03:01:44 default-k8s-diff-port-579203 kubelet[783]: E1119 03:01:44.452514     783 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2e8cb514-f8db-4efe-8e51-f6de4fd4b53f-kube-api-access-22bsf podName:2e8cb514-f8db-4efe-8e51-f6de4fd4b53f nodeName:}" failed. No retries permitted until 2025-11-19 03:01:44.952496728 +0000 UTC m=+13.091169633 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-22bsf" (UniqueName: "kubernetes.io/projected/2e8cb514-f8db-4efe-8e51-f6de4fd4b53f-kube-api-access-22bsf") pod "kubernetes-dashboard-855c9754f9-7sz62" (UID: "2e8cb514-f8db-4efe-8e51-f6de4fd4b53f") : configmap "kube-root-ca.crt" not found
	Nov 19 03:01:45 default-k8s-diff-port-579203 kubelet[783]: W1119 03:01:45.225677     783 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/d6ecbc325578cb5699745b1df1421f02473ccdb3a25858c2b3fc6ef55c70faf5/crio-d9d09765bae3989b896848db8ac82025fb98160d71a54dfca747a533e9a092da WatchSource:0}: Error finding container d9d09765bae3989b896848db8ac82025fb98160d71a54dfca747a533e9a092da: Status 404 returned error can't find the container with id d9d09765bae3989b896848db8ac82025fb98160d71a54dfca747a533e9a092da
	Nov 19 03:01:45 default-k8s-diff-port-579203 kubelet[783]: W1119 03:01:45.276219     783 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/d6ecbc325578cb5699745b1df1421f02473ccdb3a25858c2b3fc6ef55c70faf5/crio-6072879a2f315e3c62a808fd67e562ca0ab823a5bccf603f1f3aa7466f3ecc54 WatchSource:0}: Error finding container 6072879a2f315e3c62a808fd67e562ca0ab823a5bccf603f1f3aa7466f3ecc54: Status 404 returned error can't find the container with id 6072879a2f315e3c62a808fd67e562ca0ab823a5bccf603f1f3aa7466f3ecc54
	Nov 19 03:01:52 default-k8s-diff-port-579203 kubelet[783]: I1119 03:01:52.688502     783 scope.go:117] "RemoveContainer" containerID="1bff5cfaecfb7dcf74d354b0ea4c8d3fed138fe0b53c189b453157ac5a6c737a"
	Nov 19 03:01:53 default-k8s-diff-port-579203 kubelet[783]: I1119 03:01:53.693880     783 scope.go:117] "RemoveContainer" containerID="1bff5cfaecfb7dcf74d354b0ea4c8d3fed138fe0b53c189b453157ac5a6c737a"
	Nov 19 03:01:53 default-k8s-diff-port-579203 kubelet[783]: I1119 03:01:53.694169     783 scope.go:117] "RemoveContainer" containerID="91fbb59ec8bd1d126fdd3e4692f1d3c63209341307ac59801f125d3fc0243304"
	Nov 19 03:01:53 default-k8s-diff-port-579203 kubelet[783]: E1119 03:01:53.694323     783 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-57qxx_kubernetes-dashboard(aea40428-2b0c-4c57-8708-f8b56e473799)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-57qxx" podUID="aea40428-2b0c-4c57-8708-f8b56e473799"
	Nov 19 03:01:54 default-k8s-diff-port-579203 kubelet[783]: I1119 03:01:54.698591     783 scope.go:117] "RemoveContainer" containerID="91fbb59ec8bd1d126fdd3e4692f1d3c63209341307ac59801f125d3fc0243304"
	Nov 19 03:01:54 default-k8s-diff-port-579203 kubelet[783]: E1119 03:01:54.698754     783 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-57qxx_kubernetes-dashboard(aea40428-2b0c-4c57-8708-f8b56e473799)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-57qxx" podUID="aea40428-2b0c-4c57-8708-f8b56e473799"
	Nov 19 03:01:55 default-k8s-diff-port-579203 kubelet[783]: I1119 03:01:55.701200     783 scope.go:117] "RemoveContainer" containerID="91fbb59ec8bd1d126fdd3e4692f1d3c63209341307ac59801f125d3fc0243304"
	Nov 19 03:01:55 default-k8s-diff-port-579203 kubelet[783]: E1119 03:01:55.701357     783 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-57qxx_kubernetes-dashboard(aea40428-2b0c-4c57-8708-f8b56e473799)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-57qxx" podUID="aea40428-2b0c-4c57-8708-f8b56e473799"
	Nov 19 03:02:08 default-k8s-diff-port-579203 kubelet[783]: I1119 03:02:08.147203     783 scope.go:117] "RemoveContainer" containerID="91fbb59ec8bd1d126fdd3e4692f1d3c63209341307ac59801f125d3fc0243304"
	Nov 19 03:02:08 default-k8s-diff-port-579203 kubelet[783]: I1119 03:02:08.747198     783 scope.go:117] "RemoveContainer" containerID="91fbb59ec8bd1d126fdd3e4692f1d3c63209341307ac59801f125d3fc0243304"
	Nov 19 03:02:08 default-k8s-diff-port-579203 kubelet[783]: I1119 03:02:08.747526     783 scope.go:117] "RemoveContainer" containerID="0b0a1ea8af8bea488a5668e7548a35e9a9b3f133dd62d0b5eec95839647aa3a9"
	Nov 19 03:02:08 default-k8s-diff-port-579203 kubelet[783]: E1119 03:02:08.747778     783 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-57qxx_kubernetes-dashboard(aea40428-2b0c-4c57-8708-f8b56e473799)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-57qxx" podUID="aea40428-2b0c-4c57-8708-f8b56e473799"
	Nov 19 03:02:08 default-k8s-diff-port-579203 kubelet[783]: I1119 03:02:08.769784     783 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-7sz62" podStartSLOduration=10.080701345 podStartE2EDuration="24.768365457s" podCreationTimestamp="2025-11-19 03:01:44 +0000 UTC" firstStartedPulling="2025-11-19 03:01:45.284943555 +0000 UTC m=+13.423616452" lastFinishedPulling="2025-11-19 03:01:59.972607667 +0000 UTC m=+28.111280564" observedRunningTime="2025-11-19 03:02:00.745301926 +0000 UTC m=+28.883974831" watchObservedRunningTime="2025-11-19 03:02:08.768365457 +0000 UTC m=+36.907038363"
	Nov 19 03:02:11 default-k8s-diff-port-579203 kubelet[783]: I1119 03:02:11.757603     783 scope.go:117] "RemoveContainer" containerID="717bbd5246f66b2cc923d8f5ba5038836144f1b64ec4fff2f37f5caf1afef446"
	Nov 19 03:02:15 default-k8s-diff-port-579203 kubelet[783]: I1119 03:02:15.143103     783 scope.go:117] "RemoveContainer" containerID="0b0a1ea8af8bea488a5668e7548a35e9a9b3f133dd62d0b5eec95839647aa3a9"
	Nov 19 03:02:15 default-k8s-diff-port-579203 kubelet[783]: E1119 03:02:15.143310     783 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-57qxx_kubernetes-dashboard(aea40428-2b0c-4c57-8708-f8b56e473799)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-57qxx" podUID="aea40428-2b0c-4c57-8708-f8b56e473799"
	Nov 19 03:02:26 default-k8s-diff-port-579203 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 19 03:02:26 default-k8s-diff-port-579203 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 19 03:02:26 default-k8s-diff-port-579203 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [60ef01ce19f59b14a39c9d03bdda2fb6b702a2cd8a2bdca3dce9e879e6a33576] <==
	2025/11/19 03:02:00 Using namespace: kubernetes-dashboard
	2025/11/19 03:02:00 Using in-cluster config to connect to apiserver
	2025/11/19 03:02:00 Using secret token for csrf signing
	2025/11/19 03:02:00 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/19 03:02:00 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/19 03:02:00 Successful initial request to the apiserver, version: v1.34.1
	2025/11/19 03:02:00 Generating JWE encryption key
	2025/11/19 03:02:00 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/19 03:02:00 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/19 03:02:00 Initializing JWE encryption key from synchronized object
	2025/11/19 03:02:00 Creating in-cluster Sidecar client
	2025/11/19 03:02:00 Serving insecurely on HTTP port: 9090
	2025/11/19 03:02:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/19 03:02:00 Starting overwatch
	
	
	==> storage-provisioner [717bbd5246f66b2cc923d8f5ba5038836144f1b64ec4fff2f37f5caf1afef446] <==
	I1119 03:01:40.905705       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1119 03:02:10.916520       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [dc2899265d6b0abb8bb121734cf5340b1c1e2eaaeacba61cd625f0fd5849b46a] <==
	I1119 03:02:11.811246       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1119 03:02:11.830001       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1119 03:02:11.830146       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1119 03:02:11.833226       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:02:15.288162       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:02:19.548434       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:02:23.147828       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:02:26.201324       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:02:29.223053       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:02:29.229633       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 03:02:29.229791       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1119 03:02:29.230478       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9af3f9a5-889b-4042-b73a-79c73b0a4e8f", APIVersion:"v1", ResourceVersion:"683", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-579203_a6ff1c61-7f89-4f75-a41a-30366bf4b2e0 became leader
	I1119 03:02:29.230507       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-579203_a6ff1c61-7f89-4f75-a41a-30366bf4b2e0!
	W1119 03:02:29.234967       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:02:29.247717       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 03:02:29.331021       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-579203_a6ff1c61-7f89-4f75-a41a-30366bf4b2e0!
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-579203 -n default-k8s-diff-port-579203
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-579203 -n default-k8s-diff-port-579203: exit status 2 (360.315255ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-579203 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-579203
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-579203:

-- stdout --
	[
	    {
	        "Id": "d6ecbc325578cb5699745b1df1421f02473ccdb3a25858c2b3fc6ef55c70faf5",
	        "Created": "2025-11-19T02:59:35.831812475Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1656929,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T03:01:24.875954361Z",
	            "FinishedAt": "2025-11-19T03:01:24.078571417Z"
	        },
	        "Image": "sha256:6dfeb5329cc83d126555f88f43b09eef7a09c7f546c9166b94d33747df91b6df",
	        "ResolvConfPath": "/var/lib/docker/containers/d6ecbc325578cb5699745b1df1421f02473ccdb3a25858c2b3fc6ef55c70faf5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d6ecbc325578cb5699745b1df1421f02473ccdb3a25858c2b3fc6ef55c70faf5/hostname",
	        "HostsPath": "/var/lib/docker/containers/d6ecbc325578cb5699745b1df1421f02473ccdb3a25858c2b3fc6ef55c70faf5/hosts",
	        "LogPath": "/var/lib/docker/containers/d6ecbc325578cb5699745b1df1421f02473ccdb3a25858c2b3fc6ef55c70faf5/d6ecbc325578cb5699745b1df1421f02473ccdb3a25858c2b3fc6ef55c70faf5-json.log",
	        "Name": "/default-k8s-diff-port-579203",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-579203:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-579203",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d6ecbc325578cb5699745b1df1421f02473ccdb3a25858c2b3fc6ef55c70faf5",
	                "LowerDir": "/var/lib/docker/overlay2/d622a4d4992266276def27975e825f419a488b9d81d50dcaf7f9bc257af61d59-init/diff:/var/lib/docker/overlay2/c48d08e2bd245db4e1c5c6447aff9f72126e9377265a1f1172daf5070a059e2a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d622a4d4992266276def27975e825f419a488b9d81d50dcaf7f9bc257af61d59/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d622a4d4992266276def27975e825f419a488b9d81d50dcaf7f9bc257af61d59/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d622a4d4992266276def27975e825f419a488b9d81d50dcaf7f9bc257af61d59/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-579203",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-579203/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-579203",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-579203",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-579203",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "816ad20de6afc90bc5c35d80205e8832dbe6086051bc3548b5f345292d7c6451",
	            "SandboxKey": "/var/run/docker/netns/816ad20de6af",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34915"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34916"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34919"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34917"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34918"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-579203": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "16:c9:36:3a:6d:68",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8f7be654242a82c1a39285c06387290e9e449b11aff81f581eff53904d206cfb",
	                    "EndpointID": "852d362489c80720fccc4ed592bf50cc12bdb62196065f35866ad65cf3ebcf32",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-579203",
	                        "d6ecbc325578"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-579203 -n default-k8s-diff-port-579203
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-579203 -n default-k8s-diff-port-579203: exit status 2 (383.998531ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-579203 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-579203 logs -n 25: (1.280415007s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p cert-options-702842 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-702842          │ jenkins │ v1.37.0 │ 19 Nov 25 02:56 UTC │ 19 Nov 25 02:56 UTC │
	│ delete  │ -p cert-options-702842                                                                                                                                                                                                                        │ cert-options-702842          │ jenkins │ v1.37.0 │ 19 Nov 25 02:56 UTC │ 19 Nov 25 02:56 UTC │
	│ start   │ -p old-k8s-version-525469 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-525469       │ jenkins │ v1.37.0 │ 19 Nov 25 02:56 UTC │ 19 Nov 25 02:57 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-525469 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-525469       │ jenkins │ v1.37.0 │ 19 Nov 25 02:58 UTC │                     │
	│ stop    │ -p old-k8s-version-525469 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-525469       │ jenkins │ v1.37.0 │ 19 Nov 25 02:58 UTC │ 19 Nov 25 02:58 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-525469 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-525469       │ jenkins │ v1.37.0 │ 19 Nov 25 02:58 UTC │ 19 Nov 25 02:58 UTC │
	│ start   │ -p old-k8s-version-525469 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-525469       │ jenkins │ v1.37.0 │ 19 Nov 25 02:58 UTC │ 19 Nov 25 02:59 UTC │
	│ image   │ old-k8s-version-525469 image list --format=json                                                                                                                                                                                               │ old-k8s-version-525469       │ jenkins │ v1.37.0 │ 19 Nov 25 02:59 UTC │ 19 Nov 25 02:59 UTC │
	│ pause   │ -p old-k8s-version-525469 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-525469       │ jenkins │ v1.37.0 │ 19 Nov 25 02:59 UTC │                     │
	│ start   │ -p cert-expiration-422184 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-422184       │ jenkins │ v1.37.0 │ 19 Nov 25 02:59 UTC │ 19 Nov 25 02:59 UTC │
	│ delete  │ -p old-k8s-version-525469                                                                                                                                                                                                                     │ old-k8s-version-525469       │ jenkins │ v1.37.0 │ 19 Nov 25 02:59 UTC │ 19 Nov 25 02:59 UTC │
	│ delete  │ -p old-k8s-version-525469                                                                                                                                                                                                                     │ old-k8s-version-525469       │ jenkins │ v1.37.0 │ 19 Nov 25 02:59 UTC │ 19 Nov 25 02:59 UTC │
	│ start   │ -p default-k8s-diff-port-579203 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-579203 │ jenkins │ v1.37.0 │ 19 Nov 25 02:59 UTC │ 19 Nov 25 03:01 UTC │
	│ delete  │ -p cert-expiration-422184                                                                                                                                                                                                                     │ cert-expiration-422184       │ jenkins │ v1.37.0 │ 19 Nov 25 02:59 UTC │ 19 Nov 25 02:59 UTC │
	│ start   │ -p embed-certs-592123 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-592123           │ jenkins │ v1.37.0 │ 19 Nov 25 02:59 UTC │ 19 Nov 25 03:01 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-579203 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-579203 │ jenkins │ v1.37.0 │ 19 Nov 25 03:01 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-579203 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-579203 │ jenkins │ v1.37.0 │ 19 Nov 25 03:01 UTC │ 19 Nov 25 03:01 UTC │
	│ addons  │ enable metrics-server -p embed-certs-592123 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-592123           │ jenkins │ v1.37.0 │ 19 Nov 25 03:01 UTC │                     │
	│ stop    │ -p embed-certs-592123 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-592123           │ jenkins │ v1.37.0 │ 19 Nov 25 03:01 UTC │ 19 Nov 25 03:01 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-579203 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-579203 │ jenkins │ v1.37.0 │ 19 Nov 25 03:01 UTC │ 19 Nov 25 03:01 UTC │
	│ start   │ -p default-k8s-diff-port-579203 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-579203 │ jenkins │ v1.37.0 │ 19 Nov 25 03:01 UTC │ 19 Nov 25 03:02 UTC │
	│ addons  │ enable dashboard -p embed-certs-592123 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-592123           │ jenkins │ v1.37.0 │ 19 Nov 25 03:01 UTC │ 19 Nov 25 03:01 UTC │
	│ start   │ -p embed-certs-592123 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-592123           │ jenkins │ v1.37.0 │ 19 Nov 25 03:01 UTC │ 19 Nov 25 03:02 UTC │
	│ image   │ default-k8s-diff-port-579203 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-579203 │ jenkins │ v1.37.0 │ 19 Nov 25 03:02 UTC │ 19 Nov 25 03:02 UTC │
	│ pause   │ -p default-k8s-diff-port-579203 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-579203 │ jenkins │ v1.37.0 │ 19 Nov 25 03:02 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 03:01:31
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 03:01:31.694099 1658016 out.go:360] Setting OutFile to fd 1 ...
	I1119 03:01:31.694285 1658016 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 03:01:31.694312 1658016 out.go:374] Setting ErrFile to fd 2...
	I1119 03:01:31.694333 1658016 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 03:01:31.694632 1658016 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-1463525/.minikube/bin
	I1119 03:01:31.695116 1658016 out.go:368] Setting JSON to false
	I1119 03:01:31.696038 1658016 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":38619,"bootTime":1763482673,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1119 03:01:31.696425 1658016 start.go:143] virtualization:  
	I1119 03:01:31.700158 1658016 out.go:179] * [embed-certs-592123] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1119 03:01:31.704182 1658016 out.go:179]   - MINIKUBE_LOCATION=21924
	I1119 03:01:31.704392 1658016 notify.go:221] Checking for updates...
	I1119 03:01:31.710069 1658016 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 03:01:31.712926 1658016 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21924-1463525/kubeconfig
	I1119 03:01:31.715720 1658016 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-1463525/.minikube
	I1119 03:01:31.718664 1658016 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1119 03:01:31.721447 1658016 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 03:01:31.724877 1658016 config.go:182] Loaded profile config "embed-certs-592123": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 03:01:31.725487 1658016 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 03:01:31.778202 1658016 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1119 03:01:31.778328 1658016 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 03:01:31.876699 1658016 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-19 03:01:31.865000467 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 03:01:31.876810 1658016 docker.go:319] overlay module found
	I1119 03:01:31.879893 1658016 out.go:179] * Using the docker driver based on existing profile
	I1119 03:01:31.882733 1658016 start.go:309] selected driver: docker
	I1119 03:01:31.882756 1658016 start.go:930] validating driver "docker" against &{Name:embed-certs-592123 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-592123 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 03:01:31.882852 1658016 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 03:01:31.883508 1658016 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 03:01:31.999624 1658016 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-19 03:01:31.986051413 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 03:01:31.999963 1658016 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 03:01:31.999988 1658016 cni.go:84] Creating CNI manager for ""
	I1119 03:01:32.000043 1658016 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 03:01:32.000082 1658016 start.go:353] cluster config:
	{Name:embed-certs-592123 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-592123 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 03:01:32.003455 1658016 out.go:179] * Starting "embed-certs-592123" primary control-plane node in "embed-certs-592123" cluster
	I1119 03:01:32.006956 1658016 cache.go:134] Beginning downloading kic base image for docker with crio
	I1119 03:01:32.010136 1658016 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1119 03:01:32.013087 1658016 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 03:01:32.013131 1658016 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1119 03:01:32.013142 1658016 cache.go:65] Caching tarball of preloaded images
	I1119 03:01:32.013223 1658016 preload.go:238] Found /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1119 03:01:32.013232 1658016 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 03:01:32.013358 1658016 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/embed-certs-592123/config.json ...
	I1119 03:01:32.013613 1658016 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1119 03:01:32.043034 1658016 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1119 03:01:32.043057 1658016 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1119 03:01:32.043069 1658016 cache.go:243] Successfully downloaded all kic artifacts
	I1119 03:01:32.043094 1658016 start.go:360] acquireMachinesLock for embed-certs-592123: {Name:mkad274f419d3f3256db7dae28b742586dc2ebd2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 03:01:32.043146 1658016 start.go:364] duration metric: took 35.084µs to acquireMachinesLock for "embed-certs-592123"
	I1119 03:01:32.043166 1658016 start.go:96] Skipping create...Using existing machine configuration
	I1119 03:01:32.043171 1658016 fix.go:54] fixHost starting: 
	I1119 03:01:32.043430 1658016 cli_runner.go:164] Run: docker container inspect embed-certs-592123 --format={{.State.Status}}
	I1119 03:01:32.073987 1658016 fix.go:112] recreateIfNeeded on embed-certs-592123: state=Stopped err=<nil>
	W1119 03:01:32.074014 1658016 fix.go:138] unexpected machine state, will restart: <nil>
	I1119 03:01:31.382636 1656802 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-579203 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 03:01:31.408700 1656802 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1119 03:01:31.418901 1656802 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 03:01:31.429751 1656802 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-579203 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-579203 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 03:01:31.429874 1656802 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 03:01:31.429932 1656802 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 03:01:31.486743 1656802 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 03:01:31.486766 1656802 crio.go:433] Images already preloaded, skipping extraction
	I1119 03:01:31.486818 1656802 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 03:01:31.524977 1656802 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 03:01:31.525001 1656802 cache_images.go:86] Images are preloaded, skipping loading
	I1119 03:01:31.525009 1656802 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1119 03:01:31.525101 1656802 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-579203 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-579203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 03:01:31.525185 1656802 ssh_runner.go:195] Run: crio config
	I1119 03:01:31.608065 1656802 cni.go:84] Creating CNI manager for ""
	I1119 03:01:31.608085 1656802 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 03:01:31.608109 1656802 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 03:01:31.608132 1656802 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-579203 NodeName:default-k8s-diff-port-579203 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 03:01:31.608267 1656802 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-579203"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1119 03:01:31.608333 1656802 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 03:01:31.616713 1656802 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 03:01:31.616796 1656802 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 03:01:31.625905 1656802 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1119 03:01:31.640197 1656802 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 03:01:31.654632 1656802 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1119 03:01:31.669129 1656802 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1119 03:01:31.673150 1656802 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
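A minimal Go sketch of the /etc/hosts refresh that the one-line shell command above performs, assuming the same file path, IP, and host name that appear in the log (illustrative only, not minikube's own code; rewriting /etc/hosts needs root):

package main

import (
	"fmt"
	"os"
	"strings"
)

// setHostEntry drops any existing line ending in "<tab><name>" and appends a
// fresh "<ip><tab><name>" mapping, mirroring the grep -v + echo pipeline above.
func setHostEntry(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // stale mapping, replaced below
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := setHostEntry("/etc/hosts", "192.168.85.2", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}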
	I1119 03:01:31.684012 1656802 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 03:01:31.842706 1656802 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 03:01:31.862589 1656802 certs.go:69] Setting up /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/default-k8s-diff-port-579203 for IP: 192.168.85.2
	I1119 03:01:31.862611 1656802 certs.go:195] generating shared ca certs ...
	I1119 03:01:31.862626 1656802 certs.go:227] acquiring lock for ca certs: {Name:mk25124d30540ffea11da5f0aa38fb6f55186602 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 03:01:31.862778 1656802 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.key
	I1119 03:01:31.862824 1656802 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/proxy-client-ca.key
	I1119 03:01:31.862834 1656802 certs.go:257] generating profile certs ...
	I1119 03:01:31.862921 1656802 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/default-k8s-diff-port-579203/client.key
	I1119 03:01:31.863016 1656802 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/default-k8s-diff-port-579203/apiserver.key.1f3db3c7
	I1119 03:01:31.863059 1656802 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/default-k8s-diff-port-579203/proxy-client.key
	I1119 03:01:31.863172 1656802 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/1465377.pem (1338 bytes)
	W1119 03:01:31.863209 1656802 certs.go:480] ignoring /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/1465377_empty.pem, impossibly tiny 0 bytes
	I1119 03:01:31.863219 1656802 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca-key.pem (1679 bytes)
	I1119 03:01:31.863244 1656802 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem (1078 bytes)
	I1119 03:01:31.863266 1656802 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/cert.pem (1123 bytes)
	I1119 03:01:31.863287 1656802 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/key.pem (1675 bytes)
	I1119 03:01:31.863333 1656802 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/files/etc/ssl/certs/14653772.pem (1708 bytes)
	I1119 03:01:31.863893 1656802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 03:01:31.919072 1656802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1119 03:01:31.961266 1656802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 03:01:32.004872 1656802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1119 03:01:32.062731 1656802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/default-k8s-diff-port-579203/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1119 03:01:32.113061 1656802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/default-k8s-diff-port-579203/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1119 03:01:32.152998 1656802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/default-k8s-diff-port-579203/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 03:01:32.175738 1656802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/default-k8s-diff-port-579203/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1119 03:01:32.211491 1656802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/files/etc/ssl/certs/14653772.pem --> /usr/share/ca-certificates/14653772.pem (1708 bytes)
	I1119 03:01:32.240771 1656802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 03:01:32.291001 1656802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/1465377.pem --> /usr/share/ca-certificates/1465377.pem (1338 bytes)
	I1119 03:01:32.312199 1656802 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 03:01:32.328177 1656802 ssh_runner.go:195] Run: openssl version
	I1119 03:01:32.335811 1656802 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 03:01:32.347836 1656802 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 03:01:32.352606 1656802 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 01:58 /usr/share/ca-certificates/minikubeCA.pem
	I1119 03:01:32.352723 1656802 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 03:01:32.399906 1656802 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 03:01:32.418016 1656802 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1465377.pem && ln -fs /usr/share/ca-certificates/1465377.pem /etc/ssl/certs/1465377.pem"
	I1119 03:01:32.431940 1656802 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1465377.pem
	I1119 03:01:32.436366 1656802 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 02:04 /usr/share/ca-certificates/1465377.pem
	I1119 03:01:32.436425 1656802 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1465377.pem
	I1119 03:01:32.525416 1656802 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1465377.pem /etc/ssl/certs/51391683.0"
	I1119 03:01:32.534432 1656802 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14653772.pem && ln -fs /usr/share/ca-certificates/14653772.pem /etc/ssl/certs/14653772.pem"
	I1119 03:01:32.546942 1656802 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14653772.pem
	I1119 03:01:32.552347 1656802 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 02:04 /usr/share/ca-certificates/14653772.pem
	I1119 03:01:32.552406 1656802 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14653772.pem
	I1119 03:01:32.654831 1656802 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14653772.pem /etc/ssl/certs/3ec20f2e.0"
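The sequence above follows the OpenSSL trust-store convention: `openssl x509 -hash` prints the certificate's subject hash, and a symlink named <hash>.0 under /etc/ssl/certs is what OpenSSL-based clients look up. A minimal Go sketch of that convention, assuming the cert path from the log and a writable /etc/ssl/certs (it condenses the two ln -fs steps into one link; illustrative only, not minikube's implementation):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// trustCert links certPath into /etc/ssl/certs under its OpenSSL subject hash,
// producing the same <hash>.0 name the ln -fs commands above create.
func trustCert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // behave like ln -fs: replace a stale link if present
	return os.Symlink(certPath, link)
}

func main() {
	if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}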
	I1119 03:01:32.679397 1656802 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 03:01:32.693377 1656802 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1119 03:01:32.808831 1656802 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1119 03:01:32.877180 1656802 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1119 03:01:33.051675 1656802 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1119 03:01:33.127180 1656802 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1119 03:01:33.200498 1656802 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
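Each of the six openssl probes above uses `-checkend 86400`, i.e. "does this certificate expire within the next 24 hours?". A minimal pure-Go equivalent, assuming a PEM-encoded certificate at one of the paths shown in the log (an illustrative sketch, not what minikube actually runs):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path reaches NotAfter
// within the next d, matching the semantics of openssl's -checkend flag.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}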
	I1119 03:01:33.262031 1656802 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-579203 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-579203 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 03:01:33.262163 1656802 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 03:01:33.262251 1656802 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 03:01:33.312927 1656802 cri.go:89] found id: "4516831cebdb2595b82a89b6272a4678df2d23122cf9ae52b8b5ae44bd439756"
	I1119 03:01:33.312997 1656802 cri.go:89] found id: "34a04e8a9268354f0b56354ac57651328f516cc508f9fa0c077c3b4d4336b5ac"
	I1119 03:01:33.313027 1656802 cri.go:89] found id: "3803cdc1a2993683debdee19a3b01fb09e7c32a9d12eb84a9436d969662cea8a"
	I1119 03:01:33.313044 1656802 cri.go:89] found id: "1f1f933b7182604f83325a95fc3ff39e0799211227f9d528ab807a128acc0a96"
	I1119 03:01:33.313062 1656802 cri.go:89] found id: ""
	I1119 03:01:33.313127 1656802 ssh_runner.go:195] Run: sudo runc list -f json
	W1119 03:01:33.327507 1656802 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T03:01:33Z" level=error msg="open /run/runc: no such file or directory"
	I1119 03:01:33.327640 1656802 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 03:01:33.340875 1656802 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1119 03:01:33.340932 1656802 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1119 03:01:33.340994 1656802 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1119 03:01:33.353524 1656802 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1119 03:01:33.353973 1656802 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-579203" does not appear in /home/jenkins/minikube-integration/21924-1463525/kubeconfig
	I1119 03:01:33.354133 1656802 kubeconfig.go:62] /home/jenkins/minikube-integration/21924-1463525/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-579203" cluster setting kubeconfig missing "default-k8s-diff-port-579203" context setting]
	I1119 03:01:33.354443 1656802 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/kubeconfig: {Name:mk569dbb5fa76b01404eb61ebb04e9884665ea78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 03:01:33.355806 1656802 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1119 03:01:33.366501 1656802 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1119 03:01:33.366570 1656802 kubeadm.go:602] duration metric: took 25.618557ms to restartPrimaryControlPlane
	I1119 03:01:33.366594 1656802 kubeadm.go:403] duration metric: took 104.5714ms to StartCluster
	I1119 03:01:33.366623 1656802 settings.go:142] acquiring lock: {Name:mk60840cd190596747bdc8f4ec6ab9c30048bf11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 03:01:33.366710 1656802 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21924-1463525/kubeconfig
	I1119 03:01:33.367353 1656802 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/kubeconfig: {Name:mk569dbb5fa76b01404eb61ebb04e9884665ea78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 03:01:33.367575 1656802 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 03:01:33.367877 1656802 config.go:182] Loaded profile config "default-k8s-diff-port-579203": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 03:01:33.367951 1656802 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 03:01:33.368075 1656802 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-579203"
	I1119 03:01:33.368105 1656802 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-579203"
	W1119 03:01:33.368131 1656802 addons.go:248] addon storage-provisioner should already be in state true
	I1119 03:01:33.368165 1656802 host.go:66] Checking if "default-k8s-diff-port-579203" exists ...
	I1119 03:01:33.368781 1656802 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-579203 --format={{.State.Status}}
	I1119 03:01:33.368945 1656802 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-579203"
	I1119 03:01:33.368980 1656802 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-579203"
	W1119 03:01:33.369000 1656802 addons.go:248] addon dashboard should already be in state true
	I1119 03:01:33.369036 1656802 host.go:66] Checking if "default-k8s-diff-port-579203" exists ...
	I1119 03:01:33.369232 1656802 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-579203"
	I1119 03:01:33.369256 1656802 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-579203"
	I1119 03:01:33.369481 1656802 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-579203 --format={{.State.Status}}
	I1119 03:01:33.369570 1656802 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-579203 --format={{.State.Status}}
	I1119 03:01:33.377538 1656802 out.go:179] * Verifying Kubernetes components...
	I1119 03:01:33.380901 1656802 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 03:01:33.428249 1656802 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1119 03:01:33.430633 1656802 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-579203"
	W1119 03:01:33.430652 1656802 addons.go:248] addon default-storageclass should already be in state true
	I1119 03:01:33.430675 1656802 host.go:66] Checking if "default-k8s-diff-port-579203" exists ...
	I1119 03:01:33.431093 1656802 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-579203 --format={{.State.Status}}
	I1119 03:01:33.431241 1656802 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 03:01:33.436527 1656802 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1119 03:01:33.436640 1656802 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 03:01:33.436650 1656802 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 03:01:33.436709 1656802 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-579203
	I1119 03:01:33.439380 1656802 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1119 03:01:33.439406 1656802 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1119 03:01:33.439467 1656802 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-579203
	I1119 03:01:33.482617 1656802 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 03:01:33.482637 1656802 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 03:01:33.482834 1656802 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-579203
	I1119 03:01:33.495291 1656802 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34915 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/default-k8s-diff-port-579203/id_rsa Username:docker}
	I1119 03:01:33.501482 1656802 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34915 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/default-k8s-diff-port-579203/id_rsa Username:docker}
	I1119 03:01:33.523106 1656802 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34915 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/default-k8s-diff-port-579203/id_rsa Username:docker}
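The inspect calls above resolve which host port the container publishes for 22/tcp (34915 here), which the new SSH clients then dial on 127.0.0.1. A minimal sketch of the same lookup through the docker CLI, assuming the container name from the log (illustrative only):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// sshHostPort asks Docker which host port the container's 22/tcp is mapped to,
// using the same --format template that appears in the log lines above.
func sshHostPort(container string) (string, error) {
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", fmt.Errorf("inspect %s: %w", container, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("default-k8s-diff-port-579203")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("ssh host port:", port)
}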
	I1119 03:01:33.726999 1656802 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 03:01:33.732353 1656802 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1119 03:01:33.732379 1656802 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1119 03:01:33.743826 1656802 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 03:01:33.754537 1656802 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 03:01:33.786983 1656802 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-579203" to be "Ready" ...
	I1119 03:01:33.795696 1656802 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1119 03:01:33.795716 1656802 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1119 03:01:33.841748 1656802 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1119 03:01:33.841773 1656802 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1119 03:01:33.902688 1656802 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1119 03:01:33.902718 1656802 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1119 03:01:33.991741 1656802 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1119 03:01:33.991766 1656802 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1119 03:01:34.135318 1656802 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1119 03:01:34.135341 1656802 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1119 03:01:34.156785 1656802 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1119 03:01:34.156810 1656802 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1119 03:01:34.201589 1656802 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1119 03:01:34.201614 1656802 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1119 03:01:34.228872 1656802 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1119 03:01:34.228897 1656802 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1119 03:01:34.265369 1656802 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1119 03:01:32.077538 1658016 out.go:252] * Restarting existing docker container for "embed-certs-592123" ...
	I1119 03:01:32.077619 1658016 cli_runner.go:164] Run: docker start embed-certs-592123
	I1119 03:01:32.422612 1658016 cli_runner.go:164] Run: docker container inspect embed-certs-592123 --format={{.State.Status}}
	I1119 03:01:32.449537 1658016 kic.go:430] container "embed-certs-592123" state is running.
	I1119 03:01:32.449915 1658016 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-592123
	I1119 03:01:32.480682 1658016 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/embed-certs-592123/config.json ...
	I1119 03:01:32.480950 1658016 machine.go:94] provisionDockerMachine start ...
	I1119 03:01:32.481018 1658016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-592123
	I1119 03:01:32.509145 1658016 main.go:143] libmachine: Using SSH client type: native
	I1119 03:01:32.509977 1658016 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34920 <nil> <nil>}
	I1119 03:01:32.510001 1658016 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 03:01:32.510759 1658016 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1119 03:01:35.689449 1658016 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-592123
	
	I1119 03:01:35.689469 1658016 ubuntu.go:182] provisioning hostname "embed-certs-592123"
	I1119 03:01:35.689545 1658016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-592123
	I1119 03:01:35.716862 1658016 main.go:143] libmachine: Using SSH client type: native
	I1119 03:01:35.717166 1658016 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34920 <nil> <nil>}
	I1119 03:01:35.717177 1658016 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-592123 && echo "embed-certs-592123" | sudo tee /etc/hostname
	I1119 03:01:35.916932 1658016 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-592123
	
	I1119 03:01:35.917083 1658016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-592123
	I1119 03:01:35.939266 1658016 main.go:143] libmachine: Using SSH client type: native
	I1119 03:01:35.939572 1658016 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34920 <nil> <nil>}
	I1119 03:01:35.939676 1658016 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-592123' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-592123/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-592123' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 03:01:36.113904 1658016 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 03:01:36.113929 1658016 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21924-1463525/.minikube CaCertPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21924-1463525/.minikube}
	I1119 03:01:36.113990 1658016 ubuntu.go:190] setting up certificates
	I1119 03:01:36.114001 1658016 provision.go:84] configureAuth start
	I1119 03:01:36.114075 1658016 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-592123
	I1119 03:01:36.145721 1658016 provision.go:143] copyHostCerts
	I1119 03:01:36.145794 1658016 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-1463525/.minikube/cert.pem, removing ...
	I1119 03:01:36.145816 1658016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-1463525/.minikube/cert.pem
	I1119 03:01:36.145903 1658016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21924-1463525/.minikube/cert.pem (1123 bytes)
	I1119 03:01:36.146003 1658016 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-1463525/.minikube/key.pem, removing ...
	I1119 03:01:36.146015 1658016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-1463525/.minikube/key.pem
	I1119 03:01:36.146042 1658016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21924-1463525/.minikube/key.pem (1675 bytes)
	I1119 03:01:36.146101 1658016 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.pem, removing ...
	I1119 03:01:36.146111 1658016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.pem
	I1119 03:01:36.146146 1658016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.pem (1078 bytes)
	I1119 03:01:36.146200 1658016 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca-key.pem org=jenkins.embed-certs-592123 san=[127.0.0.1 192.168.76.2 embed-certs-592123 localhost minikube]
	I1119 03:01:38.867365 1656802 node_ready.go:49] node "default-k8s-diff-port-579203" is "Ready"
	I1119 03:01:38.867396 1656802 node_ready.go:38] duration metric: took 5.080374221s for node "default-k8s-diff-port-579203" to be "Ready" ...
	I1119 03:01:38.867410 1656802 api_server.go:52] waiting for apiserver process to appear ...
	I1119 03:01:38.867468 1656802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 03:01:39.376543 1656802 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.63267581s)
	I1119 03:01:37.481002 1658016 provision.go:177] copyRemoteCerts
	I1119 03:01:37.481123 1658016 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 03:01:37.481187 1658016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-592123
	I1119 03:01:37.498613 1658016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34920 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/embed-certs-592123/id_rsa Username:docker}
	I1119 03:01:37.622071 1658016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1119 03:01:37.659368 1658016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1119 03:01:37.683985 1658016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1119 03:01:37.711470 1658016 provision.go:87] duration metric: took 1.597442922s to configureAuth
	I1119 03:01:37.711500 1658016 ubuntu.go:206] setting minikube options for container-runtime
	I1119 03:01:37.711743 1658016 config.go:182] Loaded profile config "embed-certs-592123": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 03:01:37.711889 1658016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-592123
	I1119 03:01:37.734957 1658016 main.go:143] libmachine: Using SSH client type: native
	I1119 03:01:37.735283 1658016 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34920 <nil> <nil>}
	I1119 03:01:37.735301 1658016 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 03:01:38.285895 1658016 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 03:01:38.285922 1658016 machine.go:97] duration metric: took 5.804954413s to provisionDockerMachine
	I1119 03:01:38.285933 1658016 start.go:293] postStartSetup for "embed-certs-592123" (driver="docker")
	I1119 03:01:38.285967 1658016 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 03:01:38.286049 1658016 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 03:01:38.286112 1658016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-592123
	I1119 03:01:38.312919 1658016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34920 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/embed-certs-592123/id_rsa Username:docker}
	I1119 03:01:38.443748 1658016 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 03:01:38.447660 1658016 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 03:01:38.447689 1658016 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 03:01:38.447700 1658016 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-1463525/.minikube/addons for local assets ...
	I1119 03:01:38.447753 1658016 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-1463525/.minikube/files for local assets ...
	I1119 03:01:38.447832 1658016 filesync.go:149] local asset: /home/jenkins/minikube-integration/21924-1463525/.minikube/files/etc/ssl/certs/14653772.pem -> 14653772.pem in /etc/ssl/certs
	I1119 03:01:38.447940 1658016 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 03:01:38.461273 1658016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/files/etc/ssl/certs/14653772.pem --> /etc/ssl/certs/14653772.pem (1708 bytes)
	I1119 03:01:38.510174 1658016 start.go:296] duration metric: took 224.22474ms for postStartSetup
	I1119 03:01:38.510274 1658016 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 03:01:38.510320 1658016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-592123
	I1119 03:01:38.539011 1658016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34920 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/embed-certs-592123/id_rsa Username:docker}
	I1119 03:01:38.663137 1658016 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 03:01:38.674018 1658016 fix.go:56] duration metric: took 6.630839224s for fixHost
	I1119 03:01:38.674045 1658016 start.go:83] releasing machines lock for "embed-certs-592123", held for 6.630889873s
	I1119 03:01:38.674129 1658016 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-592123
	I1119 03:01:38.702738 1658016 ssh_runner.go:195] Run: cat /version.json
	I1119 03:01:38.702792 1658016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-592123
	I1119 03:01:38.703037 1658016 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 03:01:38.703099 1658016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-592123
	I1119 03:01:38.735230 1658016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34920 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/embed-certs-592123/id_rsa Username:docker}
	I1119 03:01:38.749542 1658016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34920 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/embed-certs-592123/id_rsa Username:docker}
	I1119 03:01:38.891506 1658016 ssh_runner.go:195] Run: systemctl --version
	I1119 03:01:39.029294 1658016 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 03:01:39.115959 1658016 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 03:01:39.123862 1658016 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 03:01:39.123949 1658016 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 03:01:39.140196 1658016 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1119 03:01:39.140222 1658016 start.go:496] detecting cgroup driver to use...
	I1119 03:01:39.140257 1658016 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1119 03:01:39.140335 1658016 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 03:01:39.168991 1658016 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 03:01:39.192845 1658016 docker.go:218] disabling cri-docker service (if available) ...
	I1119 03:01:39.192949 1658016 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 03:01:39.214043 1658016 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 03:01:39.240416 1658016 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 03:01:39.443354 1658016 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 03:01:39.716017 1658016 docker.go:234] disabling docker service ...
	I1119 03:01:39.716110 1658016 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 03:01:39.747581 1658016 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 03:01:39.774942 1658016 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 03:01:40.008458 1658016 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 03:01:40.213317 1658016 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 03:01:40.238337 1658016 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 03:01:40.268239 1658016 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 03:01:40.268353 1658016 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 03:01:40.278269 1658016 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1119 03:01:40.278391 1658016 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 03:01:40.294015 1658016 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 03:01:40.303202 1658016 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 03:01:40.318174 1658016 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 03:01:40.328359 1658016 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 03:01:40.344557 1658016 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 03:01:40.353087 1658016 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 03:01:40.365543 1658016 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 03:01:40.383809 1658016 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
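Applied in order, the sed edits above amount to the following fragment of /etc/crio/crio.conf.d/02-crio.conf (reconstructed from the commands in the log as an illustration; the file's other settings are left untouched):

    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]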
	I1119 03:01:40.396307 1658016 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 03:01:40.592886 1658016 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1119 03:01:40.826288 1658016 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 03:01:40.826370 1658016 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 03:01:40.832512 1658016 start.go:564] Will wait 60s for crictl version
	I1119 03:01:40.832625 1658016 ssh_runner.go:195] Run: which crictl
	I1119 03:01:40.838189 1658016 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 03:01:40.876669 1658016 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1119 03:01:40.876764 1658016 ssh_runner.go:195] Run: crio --version
	I1119 03:01:40.948048 1658016 ssh_runner.go:195] Run: crio --version
	I1119 03:01:40.996697 1658016 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1119 03:01:41.430835 1656802 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.676261904s)
	I1119 03:01:41.661740 1656802 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.396328561s)
	I1119 03:01:41.661898 1656802 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.794418905s)
	I1119 03:01:41.661913 1656802 api_server.go:72] duration metric: took 8.294288111s to wait for apiserver process to appear ...
	I1119 03:01:41.661919 1656802 api_server.go:88] waiting for apiserver healthz status ...
	I1119 03:01:41.661935 1656802 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1119 03:01:41.664921 1656802 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-579203 addons enable metrics-server
	
	I1119 03:01:41.667834 1656802 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1119 03:01:40.999730 1658016 cli_runner.go:164] Run: docker network inspect embed-certs-592123 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 03:01:41.022357 1658016 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1119 03:01:41.026453 1658016 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 03:01:41.038517 1658016 kubeadm.go:884] updating cluster {Name:embed-certs-592123 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-592123 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 03:01:41.038634 1658016 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 03:01:41.038710 1658016 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 03:01:41.092718 1658016 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 03:01:41.092746 1658016 crio.go:433] Images already preloaded, skipping extraction
	I1119 03:01:41.092805 1658016 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 03:01:41.148452 1658016 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 03:01:41.148473 1658016 cache_images.go:86] Images are preloaded, skipping loading
	I1119 03:01:41.148481 1658016 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1119 03:01:41.148578 1658016 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-592123 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-592123 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 03:01:41.148659 1658016 ssh_runner.go:195] Run: crio config
	I1119 03:01:41.262176 1658016 cni.go:84] Creating CNI manager for ""
	I1119 03:01:41.262209 1658016 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 03:01:41.262232 1658016 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 03:01:41.262256 1658016 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-592123 NodeName:embed-certs-592123 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 03:01:41.262400 1658016 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-592123"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1119 03:01:41.262492 1658016 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 03:01:41.271122 1658016 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 03:01:41.271212 1658016 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 03:01:41.285247 1658016 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1119 03:01:41.315966 1658016 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 03:01:41.335409 1658016 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1119 03:01:41.353268 1658016 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1119 03:01:41.359773 1658016 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 03:01:41.372085 1658016 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 03:01:41.573466 1658016 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 03:01:41.620128 1658016 certs.go:69] Setting up /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/embed-certs-592123 for IP: 192.168.76.2
	I1119 03:01:41.620157 1658016 certs.go:195] generating shared ca certs ...
	I1119 03:01:41.620173 1658016 certs.go:227] acquiring lock for ca certs: {Name:mk25124d30540ffea11da5f0aa38fb6f55186602 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 03:01:41.620344 1658016 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.key
	I1119 03:01:41.620398 1658016 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/proxy-client-ca.key
	I1119 03:01:41.620409 1658016 certs.go:257] generating profile certs ...
	I1119 03:01:41.620523 1658016 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/embed-certs-592123/client.key
	I1119 03:01:41.620596 1658016 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/embed-certs-592123/apiserver.key.9c644e00
	I1119 03:01:41.620640 1658016 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/embed-certs-592123/proxy-client.key
	I1119 03:01:41.620774 1658016 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/1465377.pem (1338 bytes)
	W1119 03:01:41.620810 1658016 certs.go:480] ignoring /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/1465377_empty.pem, impossibly tiny 0 bytes
	I1119 03:01:41.620830 1658016 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca-key.pem (1679 bytes)
	I1119 03:01:41.620861 1658016 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem (1078 bytes)
	I1119 03:01:41.620890 1658016 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/cert.pem (1123 bytes)
	I1119 03:01:41.620922 1658016 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/key.pem (1675 bytes)
	I1119 03:01:41.620969 1658016 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/files/etc/ssl/certs/14653772.pem (1708 bytes)
	I1119 03:01:41.621663 1658016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 03:01:41.666747 1658016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1119 03:01:41.706670 1658016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 03:01:41.735378 1658016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1119 03:01:41.789349 1658016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/embed-certs-592123/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1119 03:01:41.826329 1658016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/embed-certs-592123/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1119 03:01:41.869206 1658016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/embed-certs-592123/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 03:01:41.913013 1658016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/embed-certs-592123/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1119 03:01:41.969269 1658016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/files/etc/ssl/certs/14653772.pem --> /usr/share/ca-certificates/14653772.pem (1708 bytes)
	I1119 03:01:42.004775 1658016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 03:01:42.031638 1658016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/1465377.pem --> /usr/share/ca-certificates/1465377.pem (1338 bytes)
	I1119 03:01:42.055302 1658016 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 03:01:42.074752 1658016 ssh_runner.go:195] Run: openssl version
	I1119 03:01:42.083886 1658016 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 03:01:42.100589 1658016 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 03:01:42.109790 1658016 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 01:58 /usr/share/ca-certificates/minikubeCA.pem
	I1119 03:01:42.109932 1658016 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 03:01:42.164052 1658016 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 03:01:42.174595 1658016 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1465377.pem && ln -fs /usr/share/ca-certificates/1465377.pem /etc/ssl/certs/1465377.pem"
	I1119 03:01:42.186485 1658016 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1465377.pem
	I1119 03:01:42.192616 1658016 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 02:04 /usr/share/ca-certificates/1465377.pem
	I1119 03:01:42.192830 1658016 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1465377.pem
	I1119 03:01:42.243513 1658016 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1465377.pem /etc/ssl/certs/51391683.0"
	I1119 03:01:42.254498 1658016 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14653772.pem && ln -fs /usr/share/ca-certificates/14653772.pem /etc/ssl/certs/14653772.pem"
	I1119 03:01:42.267812 1658016 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14653772.pem
	I1119 03:01:42.273902 1658016 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 02:04 /usr/share/ca-certificates/14653772.pem
	I1119 03:01:42.274057 1658016 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14653772.pem
	I1119 03:01:42.329694 1658016 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14653772.pem /etc/ssl/certs/3ec20f2e.0"
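Each pair of "openssl x509 -hash -noout" and "ln -fs ... /etc/ssl/certs/<hash>.0" above installs a CA into OpenSSL's hashed-directory lookup scheme: the hash of the certificate's subject name becomes the symlink name, which is how TLS clients on the node find the minikube CA without rebuilding the system bundle. A sketch of the same pairing, shelling out to openssl the way the log does (the helper name is made up for illustration):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA reproduces the "openssl x509 -hash" + "ln -fs" pair from the log:
// compute the subject-name hash of a PEM certificate and symlink it as
// /etc/ssl/certs/<hash>.0 so OpenSSL's hashed lookup can find it.
func installCA(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // "ln -fs" semantics: replace an existing link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}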
	I1119 03:01:42.339413 1658016 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 03:01:42.345031 1658016 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1119 03:01:42.390953 1658016 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1119 03:01:42.496916 1658016 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1119 03:01:42.617590 1658016 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1119 03:01:42.702909 1658016 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1119 03:01:42.819439 1658016 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
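The "-checkend 86400" runs above ask openssl whether each control-plane certificate (apiserver-kubelet-client, the etcd server, peer and healthcheck-client certs, front-proxy-client, and so on) will expire within the next 86400 seconds, i.e. 24 hours; a non-zero exit would trigger regeneration. A minimal sketch of the same check done in-process, using an example path from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin is the in-process equivalent of
// "openssl x509 -noout -checkend <seconds>": true if the certificate's
// NotAfter falls inside the next d.
func expiresWithin(pemPath string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}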
	I1119 03:01:42.930523 1658016 kubeadm.go:401] StartCluster: {Name:embed-certs-592123 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-592123 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 03:01:42.930662 1658016 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 03:01:42.930769 1658016 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 03:01:43.018406 1658016 cri.go:89] found id: "28baf9cda670ab54ffce2ff7181d4841299d3d55c51eab8df2a52c1c366a4111"
	I1119 03:01:43.018465 1658016 cri.go:89] found id: "44051fa115dbdefd2547da0097f35a9d487cbcc9b4becc2a70f91a77a0d1da21"
	I1119 03:01:43.018493 1658016 cri.go:89] found id: "0c30389a4661b622b8e4e66ed3373832cf9f4abe199dc1ec782692aa5b76a699"
	I1119 03:01:43.018512 1658016 cri.go:89] found id: "50a2bdb9c67513a1526c7008d09101b3db95d7bac468c5e2f2f7dcda041de7b5"
	I1119 03:01:43.018538 1658016 cri.go:89] found id: ""
	I1119 03:01:43.018613 1658016 ssh_runner.go:195] Run: sudo runc list -f json
	W1119 03:01:43.050006 1658016 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T03:01:43Z" level=error msg="open /run/runc: no such file or directory"
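The crictl query above lists every kube-system container (paused or not) by filtering on the io.kubernetes.pod.namespace label; the follow-up "runc list -f json" fails because /run/runc does not exist on this node, and minikube logs that as a warning and moves on. A sketch that runs the same crictl command and collects the IDs, requiring crictl and root on the node, so illustrative only:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// kubeSystemContainerIDs runs the same crictl command as the log above and
// returns one container ID per output line.
func kubeSystemContainerIDs() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	ids, err := kubeSystemContainerIDs()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	fmt.Printf("found %d kube-system containers\n", len(ids))
}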
	I1119 03:01:43.050138 1658016 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 03:01:43.068372 1658016 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1119 03:01:43.068442 1658016 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1119 03:01:43.068517 1658016 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1119 03:01:43.086612 1658016 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1119 03:01:43.087281 1658016 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-592123" does not appear in /home/jenkins/minikube-integration/21924-1463525/kubeconfig
	I1119 03:01:43.087609 1658016 kubeconfig.go:62] /home/jenkins/minikube-integration/21924-1463525/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-592123" cluster setting kubeconfig missing "embed-certs-592123" context setting]
	I1119 03:01:43.088162 1658016 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/kubeconfig: {Name:mk569dbb5fa76b01404eb61ebb04e9884665ea78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 03:01:43.089916 1658016 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1119 03:01:43.106949 1658016 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1119 03:01:43.107033 1658016 kubeadm.go:602] duration metric: took 38.571245ms to restartPrimaryControlPlane
	I1119 03:01:43.107058 1658016 kubeadm.go:403] duration metric: took 176.542666ms to StartCluster
	I1119 03:01:43.107087 1658016 settings.go:142] acquiring lock: {Name:mk60840cd190596747bdc8f4ec6ab9c30048bf11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 03:01:43.107190 1658016 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21924-1463525/kubeconfig
	I1119 03:01:43.108563 1658016 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/kubeconfig: {Name:mk569dbb5fa76b01404eb61ebb04e9884665ea78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 03:01:43.108856 1658016 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 03:01:43.109391 1658016 config.go:182] Loaded profile config "embed-certs-592123": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 03:01:43.109384 1658016 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 03:01:43.109466 1658016 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-592123"
	I1119 03:01:43.109473 1658016 addons.go:70] Setting dashboard=true in profile "embed-certs-592123"
	I1119 03:01:43.109480 1658016 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-592123"
	W1119 03:01:43.109487 1658016 addons.go:248] addon storage-provisioner should already be in state true
	I1119 03:01:43.109488 1658016 addons.go:239] Setting addon dashboard=true in "embed-certs-592123"
	W1119 03:01:43.109494 1658016 addons.go:248] addon dashboard should already be in state true
	I1119 03:01:43.109570 1658016 host.go:66] Checking if "embed-certs-592123" exists ...
	I1119 03:01:43.109627 1658016 host.go:66] Checking if "embed-certs-592123" exists ...
	I1119 03:01:43.110054 1658016 cli_runner.go:164] Run: docker container inspect embed-certs-592123 --format={{.State.Status}}
	I1119 03:01:43.110073 1658016 cli_runner.go:164] Run: docker container inspect embed-certs-592123 --format={{.State.Status}}
	I1119 03:01:43.112556 1658016 addons.go:70] Setting default-storageclass=true in profile "embed-certs-592123"
	I1119 03:01:43.112588 1658016 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-592123"
	I1119 03:01:43.113501 1658016 cli_runner.go:164] Run: docker container inspect embed-certs-592123 --format={{.State.Status}}
	I1119 03:01:43.151994 1658016 out.go:179] * Verifying Kubernetes components...
	I1119 03:01:43.157749 1658016 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 03:01:43.157971 1658016 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1119 03:01:43.161116 1658016 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1119 03:01:43.173620 1658016 addons.go:239] Setting addon default-storageclass=true in "embed-certs-592123"
	W1119 03:01:43.173648 1658016 addons.go:248] addon default-storageclass should already be in state true
	I1119 03:01:43.173673 1658016 host.go:66] Checking if "embed-certs-592123" exists ...
	I1119 03:01:43.174122 1658016 cli_runner.go:164] Run: docker container inspect embed-certs-592123 --format={{.State.Status}}
	I1119 03:01:43.175058 1658016 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1119 03:01:43.175081 1658016 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1119 03:01:43.175143 1658016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-592123
	I1119 03:01:43.175260 1658016 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 03:01:41.670724 1656802 addons.go:515] duration metric: took 8.302736772s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1119 03:01:41.682414 1656802 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1119 03:01:41.684228 1656802 api_server.go:141] control plane version: v1.34.1
	I1119 03:01:41.684252 1656802 api_server.go:131] duration metric: took 22.327558ms to wait for apiserver health ...
	I1119 03:01:41.684261 1656802 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 03:01:41.722659 1656802 system_pods.go:59] 8 kube-system pods found
	I1119 03:01:41.722703 1656802 system_pods.go:61] "coredns-66bc5c9577-pkngt" [d74743aa-7170-415b-9f00-b196bc8b9837] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 03:01:41.722713 1656802 system_pods.go:61] "etcd-default-k8s-diff-port-579203" [e826f0a7-b445-41e7-a7b6-ef191991365e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1119 03:01:41.722719 1656802 system_pods.go:61] "kindnet-bt849" [5690abd0-63a3-4580-a0bf-a259dc29f6d0] Running
	I1119 03:01:41.722726 1656802 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-579203" [e50a666b-744d-415d-ac95-e502bf62a072] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1119 03:01:41.722732 1656802 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-579203" [28be9327-f878-4393-b4d3-dfe89f015c31] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1119 03:01:41.722738 1656802 system_pods.go:61] "kube-proxy-7ncfq" [2cd4821b-c2c9-4f47-b5de-93e55c8f8c38] Running
	I1119 03:01:41.722745 1656802 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-579203" [5b81d9f1-896a-4c4f-8c41-61b7b48d40ad] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1119 03:01:41.722758 1656802 system_pods.go:61] "storage-provisioner" [9639e9e0-73e8-48ed-a25a-603c687470cd] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 03:01:41.722766 1656802 system_pods.go:74] duration metric: took 38.49864ms to wait for pod list to return data ...
	I1119 03:01:41.722775 1656802 default_sa.go:34] waiting for default service account to be created ...
	I1119 03:01:41.739110 1656802 default_sa.go:45] found service account: "default"
	I1119 03:01:41.739131 1656802 default_sa.go:55] duration metric: took 16.349743ms for default service account to be created ...
	I1119 03:01:41.739194 1656802 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 03:01:41.743937 1656802 system_pods.go:86] 8 kube-system pods found
	I1119 03:01:41.744018 1656802 system_pods.go:89] "coredns-66bc5c9577-pkngt" [d74743aa-7170-415b-9f00-b196bc8b9837] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 03:01:41.744045 1656802 system_pods.go:89] "etcd-default-k8s-diff-port-579203" [e826f0a7-b445-41e7-a7b6-ef191991365e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1119 03:01:41.744066 1656802 system_pods.go:89] "kindnet-bt849" [5690abd0-63a3-4580-a0bf-a259dc29f6d0] Running
	I1119 03:01:41.744110 1656802 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-579203" [e50a666b-744d-415d-ac95-e502bf62a072] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1119 03:01:41.744131 1656802 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-579203" [28be9327-f878-4393-b4d3-dfe89f015c31] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1119 03:01:41.744167 1656802 system_pods.go:89] "kube-proxy-7ncfq" [2cd4821b-c2c9-4f47-b5de-93e55c8f8c38] Running
	I1119 03:01:41.744193 1656802 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-579203" [5b81d9f1-896a-4c4f-8c41-61b7b48d40ad] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1119 03:01:41.744212 1656802 system_pods.go:89] "storage-provisioner" [9639e9e0-73e8-48ed-a25a-603c687470cd] Running
	I1119 03:01:41.744249 1656802 system_pods.go:126] duration metric: took 5.048931ms to wait for k8s-apps to be running ...
	I1119 03:01:41.744275 1656802 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 03:01:41.744359 1656802 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 03:01:41.768797 1656802 system_svc.go:56] duration metric: took 24.51344ms WaitForService to wait for kubelet
	I1119 03:01:41.768872 1656802 kubeadm.go:587] duration metric: took 8.401245822s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 03:01:41.768909 1656802 node_conditions.go:102] verifying NodePressure condition ...
	I1119 03:01:41.774044 1656802 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1119 03:01:41.774127 1656802 node_conditions.go:123] node cpu capacity is 2
	I1119 03:01:41.774154 1656802 node_conditions.go:105] duration metric: took 5.225729ms to run NodePressure ...
	I1119 03:01:41.774178 1656802 start.go:242] waiting for startup goroutines ...
	I1119 03:01:41.774213 1656802 start.go:247] waiting for cluster config update ...
	I1119 03:01:41.774243 1656802 start.go:256] writing updated cluster config ...
	I1119 03:01:41.774598 1656802 ssh_runner.go:195] Run: rm -f paused
	I1119 03:01:41.780165 1656802 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 03:01:41.783949 1656802 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-pkngt" in "kube-system" namespace to be "Ready" or be gone ...
	W1119 03:01:43.790163 1656802 pod_ready.go:104] pod "coredns-66bc5c9577-pkngt" is not "Ready", error: <nil>
	I1119 03:01:43.179717 1658016 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 03:01:43.179746 1658016 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 03:01:43.179813 1658016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-592123
	I1119 03:01:43.219196 1658016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34920 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/embed-certs-592123/id_rsa Username:docker}
	I1119 03:01:43.233795 1658016 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 03:01:43.233815 1658016 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 03:01:43.233888 1658016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-592123
	I1119 03:01:43.235409 1658016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34920 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/embed-certs-592123/id_rsa Username:docker}
	I1119 03:01:43.263024 1658016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34920 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/embed-certs-592123/id_rsa Username:docker}
	I1119 03:01:43.443517 1658016 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 03:01:43.497238 1658016 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1119 03:01:43.497304 1658016 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1119 03:01:43.537813 1658016 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 03:01:43.566248 1658016 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1119 03:01:43.566313 1658016 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1119 03:01:43.580461 1658016 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 03:01:43.616642 1658016 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1119 03:01:43.616707 1658016 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1119 03:01:43.696205 1658016 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1119 03:01:43.696269 1658016 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1119 03:01:43.758248 1658016 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1119 03:01:43.758321 1658016 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1119 03:01:43.803898 1658016 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1119 03:01:43.803973 1658016 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1119 03:01:43.857915 1658016 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1119 03:01:43.857988 1658016 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1119 03:01:43.887401 1658016 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1119 03:01:43.887473 1658016 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1119 03:01:43.919156 1658016 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1119 03:01:43.919230 1658016 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1119 03:01:43.955362 1658016 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1119 03:01:45.792410 1656802 pod_ready.go:104] pod "coredns-66bc5c9577-pkngt" is not "Ready", error: <nil>
	W1119 03:01:47.793974 1656802 pod_ready.go:104] pod "coredns-66bc5c9577-pkngt" is not "Ready", error: <nil>
	I1119 03:01:53.061494 1658016 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.617935153s)
	I1119 03:01:53.061575 1658016 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (9.523699557s)
	I1119 03:01:53.061609 1658016 node_ready.go:35] waiting up to 6m0s for node "embed-certs-592123" to be "Ready" ...
	I1119 03:01:53.061903 1658016 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.481378979s)
	I1119 03:01:53.062149 1658016 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (9.106708624s)
	I1119 03:01:53.065575 1658016 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-592123 addons enable metrics-server
	
	I1119 03:01:53.109051 1658016 node_ready.go:49] node "embed-certs-592123" is "Ready"
	I1119 03:01:53.109130 1658016 node_ready.go:38] duration metric: took 47.507974ms for node "embed-certs-592123" to be "Ready" ...
	I1119 03:01:53.109158 1658016 api_server.go:52] waiting for apiserver process to appear ...
	I1119 03:01:53.109245 1658016 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 03:01:53.143045 1658016 api_server.go:72] duration metric: took 10.034041016s to wait for apiserver process to appear ...
	I1119 03:01:53.143073 1658016 api_server.go:88] waiting for apiserver healthz status ...
	I1119 03:01:53.143092 1658016 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 03:01:53.150888 1658016 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1119 03:01:50.294379 1656802 pod_ready.go:104] pod "coredns-66bc5c9577-pkngt" is not "Ready", error: <nil>
	W1119 03:01:52.803744 1656802 pod_ready.go:104] pod "coredns-66bc5c9577-pkngt" is not "Ready", error: <nil>
	I1119 03:01:53.153808 1658016 addons.go:515] duration metric: took 10.044421191s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1119 03:01:53.173080 1658016 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1119 03:01:53.173106 1658016 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
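The /healthz body above enumerates the apiserver's individual checks; everything is [+] except poststarthook/rbac/bootstrap-roles, which has not finished after the restart, so the endpoint returns 500 until it does (about half a second later in this log). A minimal polling sketch, assuming direct reachability of the endpoint and skipping TLS verification only to keep the example short:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthy polls url until it returns HTTP 200 or the timeout expires.
// InsecureSkipVerify is for brevity only; a real client would trust the
// cluster CA from /var/lib/minikube/certs/ca.crt instead.
func waitHealthy(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	// Endpoint taken from the log above.
	if err := waitHealthy("https://192.168.76.2:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver healthy")
}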
	I1119 03:01:53.643397 1658016 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 03:01:53.654965 1658016 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1119 03:01:53.656168 1658016 api_server.go:141] control plane version: v1.34.1
	I1119 03:01:53.656188 1658016 api_server.go:131] duration metric: took 513.10786ms to wait for apiserver health ...
	I1119 03:01:53.656197 1658016 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 03:01:53.660021 1658016 system_pods.go:59] 8 kube-system pods found
	I1119 03:01:53.660059 1658016 system_pods.go:61] "coredns-66bc5c9577-vtc44" [5e3bd982-5dec-4b41-97a5-feea8996184f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 03:01:53.660078 1658016 system_pods.go:61] "etcd-embed-certs-592123" [7a5b129c-3716-4d23-8c43-28d58936c458] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1119 03:01:53.660085 1658016 system_pods.go:61] "kindnet-sv99p" [30531f66-1993-4675-a8a7-c88fbd84c7e0] Running
	I1119 03:01:53.660090 1658016 system_pods.go:61] "kube-apiserver-embed-certs-592123" [a890bda5-d7b3-4776-9e06-d9323deea3d5] Running
	I1119 03:01:53.660105 1658016 system_pods.go:61] "kube-controller-manager-embed-certs-592123" [b5eadc5e-a4d2-45fb-ac21-8c466ec953fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1119 03:01:53.660117 1658016 system_pods.go:61] "kube-proxy-55pcf" [5d001372-9066-4ffc-a2f5-1f51e988cb2a] Running
	I1119 03:01:53.660123 1658016 system_pods.go:61] "kube-scheduler-embed-certs-592123" [d216d9cd-538e-4206-b0cf-37d7c5e8d4a3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1119 03:01:53.660128 1658016 system_pods.go:61] "storage-provisioner" [34c0ebbf-6c58-4d0b-94de-dbfcf04b254d] Running
	I1119 03:01:53.660139 1658016 system_pods.go:74] duration metric: took 3.935961ms to wait for pod list to return data ...
	I1119 03:01:53.660147 1658016 default_sa.go:34] waiting for default service account to be created ...
	I1119 03:01:53.663078 1658016 default_sa.go:45] found service account: "default"
	I1119 03:01:53.663104 1658016 default_sa.go:55] duration metric: took 2.951424ms for default service account to be created ...
	I1119 03:01:53.663113 1658016 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 03:01:53.666546 1658016 system_pods.go:86] 8 kube-system pods found
	I1119 03:01:53.666578 1658016 system_pods.go:89] "coredns-66bc5c9577-vtc44" [5e3bd982-5dec-4b41-97a5-feea8996184f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 03:01:53.666587 1658016 system_pods.go:89] "etcd-embed-certs-592123" [7a5b129c-3716-4d23-8c43-28d58936c458] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1119 03:01:53.666592 1658016 system_pods.go:89] "kindnet-sv99p" [30531f66-1993-4675-a8a7-c88fbd84c7e0] Running
	I1119 03:01:53.666597 1658016 system_pods.go:89] "kube-apiserver-embed-certs-592123" [a890bda5-d7b3-4776-9e06-d9323deea3d5] Running
	I1119 03:01:53.666604 1658016 system_pods.go:89] "kube-controller-manager-embed-certs-592123" [b5eadc5e-a4d2-45fb-ac21-8c466ec953fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1119 03:01:53.666612 1658016 system_pods.go:89] "kube-proxy-55pcf" [5d001372-9066-4ffc-a2f5-1f51e988cb2a] Running
	I1119 03:01:53.666619 1658016 system_pods.go:89] "kube-scheduler-embed-certs-592123" [d216d9cd-538e-4206-b0cf-37d7c5e8d4a3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1119 03:01:53.666630 1658016 system_pods.go:89] "storage-provisioner" [34c0ebbf-6c58-4d0b-94de-dbfcf04b254d] Running
	I1119 03:01:53.666638 1658016 system_pods.go:126] duration metric: took 3.519218ms to wait for k8s-apps to be running ...
	I1119 03:01:53.666652 1658016 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 03:01:53.666716 1658016 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 03:01:53.697982 1658016 system_svc.go:56] duration metric: took 31.319686ms WaitForService to wait for kubelet
	I1119 03:01:53.698012 1658016 kubeadm.go:587] duration metric: took 10.589014492s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 03:01:53.698030 1658016 node_conditions.go:102] verifying NodePressure condition ...
	I1119 03:01:53.703692 1658016 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1119 03:01:53.703727 1658016 node_conditions.go:123] node cpu capacity is 2
	I1119 03:01:53.703743 1658016 node_conditions.go:105] duration metric: took 5.704961ms to run NodePressure ...
	I1119 03:01:53.703757 1658016 start.go:242] waiting for startup goroutines ...
	I1119 03:01:53.703764 1658016 start.go:247] waiting for cluster config update ...
	I1119 03:01:53.703778 1658016 start.go:256] writing updated cluster config ...
	I1119 03:01:53.704062 1658016 ssh_runner.go:195] Run: rm -f paused
	I1119 03:01:53.708700 1658016 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 03:01:53.717445 1658016 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-vtc44" in "kube-system" namespace to be "Ready" or be gone ...
	W1119 03:01:55.726256 1658016 pod_ready.go:104] pod "coredns-66bc5c9577-vtc44" is not "Ready", error: <nil>
	W1119 03:01:55.288922 1656802 pod_ready.go:104] pod "coredns-66bc5c9577-pkngt" is not "Ready", error: <nil>
	W1119 03:01:57.292291 1656802 pod_ready.go:104] pod "coredns-66bc5c9577-pkngt" is not "Ready", error: <nil>
	W1119 03:01:59.292945 1656802 pod_ready.go:104] pod "coredns-66bc5c9577-pkngt" is not "Ready", error: <nil>
	W1119 03:01:58.223419 1658016 pod_ready.go:104] pod "coredns-66bc5c9577-vtc44" is not "Ready", error: <nil>
	W1119 03:02:00.278475 1658016 pod_ready.go:104] pod "coredns-66bc5c9577-vtc44" is not "Ready", error: <nil>
	W1119 03:02:01.790375 1656802 pod_ready.go:104] pod "coredns-66bc5c9577-pkngt" is not "Ready", error: <nil>
	W1119 03:02:04.289904 1656802 pod_ready.go:104] pod "coredns-66bc5c9577-pkngt" is not "Ready", error: <nil>
	W1119 03:02:02.723334 1658016 pod_ready.go:104] pod "coredns-66bc5c9577-vtc44" is not "Ready", error: <nil>
	W1119 03:02:05.224627 1658016 pod_ready.go:104] pod "coredns-66bc5c9577-vtc44" is not "Ready", error: <nil>
	W1119 03:02:06.793192 1656802 pod_ready.go:104] pod "coredns-66bc5c9577-pkngt" is not "Ready", error: <nil>
	W1119 03:02:09.289049 1656802 pod_ready.go:104] pod "coredns-66bc5c9577-pkngt" is not "Ready", error: <nil>
	W1119 03:02:07.233415 1658016 pod_ready.go:104] pod "coredns-66bc5c9577-vtc44" is not "Ready", error: <nil>
	W1119 03:02:09.723045 1658016 pod_ready.go:104] pod "coredns-66bc5c9577-vtc44" is not "Ready", error: <nil>
	W1119 03:02:11.290055 1656802 pod_ready.go:104] pod "coredns-66bc5c9577-pkngt" is not "Ready", error: <nil>
	I1119 03:02:12.790427 1656802 pod_ready.go:94] pod "coredns-66bc5c9577-pkngt" is "Ready"
	I1119 03:02:12.790507 1656802 pod_ready.go:86] duration metric: took 31.006488894s for pod "coredns-66bc5c9577-pkngt" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:02:12.793312 1656802 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-579203" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:02:12.797770 1656802 pod_ready.go:94] pod "etcd-default-k8s-diff-port-579203" is "Ready"
	I1119 03:02:12.797796 1656802 pod_ready.go:86] duration metric: took 4.458802ms for pod "etcd-default-k8s-diff-port-579203" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:02:12.800142 1656802 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-579203" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:02:12.804674 1656802 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-579203" is "Ready"
	I1119 03:02:12.804715 1656802 pod_ready.go:86] duration metric: took 4.550434ms for pod "kube-apiserver-default-k8s-diff-port-579203" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:02:12.807016 1656802 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-579203" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:02:12.988477 1656802 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-579203" is "Ready"
	I1119 03:02:12.988513 1656802 pod_ready.go:86] duration metric: took 181.4741ms for pod "kube-controller-manager-default-k8s-diff-port-579203" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:02:13.188436 1656802 pod_ready.go:83] waiting for pod "kube-proxy-7ncfq" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:02:13.588469 1656802 pod_ready.go:94] pod "kube-proxy-7ncfq" is "Ready"
	I1119 03:02:13.588497 1656802 pod_ready.go:86] duration metric: took 400.032955ms for pod "kube-proxy-7ncfq" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:02:13.788515 1656802 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-579203" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:02:14.188702 1656802 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-579203" is "Ready"
	I1119 03:02:14.188782 1656802 pod_ready.go:86] duration metric: took 400.239275ms for pod "kube-scheduler-default-k8s-diff-port-579203" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:02:14.188820 1656802 pod_ready.go:40] duration metric: took 32.40858096s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 03:02:14.253571 1656802 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1119 03:02:14.256753 1656802 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-579203" cluster and "default" namespace by default
	W1119 03:02:12.224015 1658016 pod_ready.go:104] pod "coredns-66bc5c9577-vtc44" is not "Ready", error: <nil>
	W1119 03:02:14.723585 1658016 pod_ready.go:104] pod "coredns-66bc5c9577-vtc44" is not "Ready", error: <nil>
	W1119 03:02:17.223113 1658016 pod_ready.go:104] pod "coredns-66bc5c9577-vtc44" is not "Ready", error: <nil>
	W1119 03:02:19.223468 1658016 pod_ready.go:104] pod "coredns-66bc5c9577-vtc44" is not "Ready", error: <nil>
	W1119 03:02:21.722580 1658016 pod_ready.go:104] pod "coredns-66bc5c9577-vtc44" is not "Ready", error: <nil>
	W1119 03:02:24.222733 1658016 pod_ready.go:104] pod "coredns-66bc5c9577-vtc44" is not "Ready", error: <nil>
	I1119 03:02:25.223317 1658016 pod_ready.go:94] pod "coredns-66bc5c9577-vtc44" is "Ready"
	I1119 03:02:25.223345 1658016 pod_ready.go:86] duration metric: took 31.505871824s for pod "coredns-66bc5c9577-vtc44" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:02:25.226268 1658016 pod_ready.go:83] waiting for pod "etcd-embed-certs-592123" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:02:25.230852 1658016 pod_ready.go:94] pod "etcd-embed-certs-592123" is "Ready"
	I1119 03:02:25.230882 1658016 pod_ready.go:86] duration metric: took 4.588546ms for pod "etcd-embed-certs-592123" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:02:25.232932 1658016 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-592123" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:02:25.237209 1658016 pod_ready.go:94] pod "kube-apiserver-embed-certs-592123" is "Ready"
	I1119 03:02:25.237237 1658016 pod_ready.go:86] duration metric: took 4.279468ms for pod "kube-apiserver-embed-certs-592123" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:02:25.239472 1658016 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-592123" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:02:25.420523 1658016 pod_ready.go:94] pod "kube-controller-manager-embed-certs-592123" is "Ready"
	I1119 03:02:25.420555 1658016 pod_ready.go:86] duration metric: took 181.058406ms for pod "kube-controller-manager-embed-certs-592123" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:02:25.620523 1658016 pod_ready.go:83] waiting for pod "kube-proxy-55pcf" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:02:26.020653 1658016 pod_ready.go:94] pod "kube-proxy-55pcf" is "Ready"
	I1119 03:02:26.020686 1658016 pod_ready.go:86] duration metric: took 400.085735ms for pod "kube-proxy-55pcf" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:02:26.220857 1658016 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-592123" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:02:26.620682 1658016 pod_ready.go:94] pod "kube-scheduler-embed-certs-592123" is "Ready"
	I1119 03:02:26.620708 1658016 pod_ready.go:86] duration metric: took 399.828135ms for pod "kube-scheduler-embed-certs-592123" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:02:26.620721 1658016 pod_ready.go:40] duration metric: took 32.911988432s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 03:02:26.700063 1658016 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1119 03:02:26.702951 1658016 out.go:179] * Done! kubectl is now configured to use "embed-certs-592123" cluster and "default" namespace by default
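Both restarts above end the same way: after the addons apply, minikube waits (pod_ready.go) for the labelled kube-system pods, coredns, etcd, kube-apiserver, kube-controller-manager, kube-proxy and kube-scheduler, to report the Ready condition before printing "Done!". A sketch of that per-pod check written against client-go; the kubeconfig source and pod name are placeholders for illustration:

package main

import (
	"context"
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the PodReady condition is True, which is what the
// waits in the log above are checking for.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Kubeconfig taken from the environment; the report's own path lives
	// under the Jenkins workspace.
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Pod name copied from the log above as an example.
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), "coredns-66bc5c9577-vtc44", metav1.GetOptions{})
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("Ready:", podReady(pod))
}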
	
	
	==> CRI-O <==
	Nov 19 03:02:11 default-k8s-diff-port-579203 crio[653]: time="2025-11-19T03:02:11.759383975Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=2debabb9-a8a0-4b47-8a76-cc52393d25d9 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 03:02:11 default-k8s-diff-port-579203 crio[653]: time="2025-11-19T03:02:11.761498376Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=db1f30e8-c83d-4621-aa93-c6914ac0d1db name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 03:02:11 default-k8s-diff-port-579203 crio[653]: time="2025-11-19T03:02:11.761647984Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 03:02:11 default-k8s-diff-port-579203 crio[653]: time="2025-11-19T03:02:11.766319006Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 03:02:11 default-k8s-diff-port-579203 crio[653]: time="2025-11-19T03:02:11.766489618Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/aa7c0ab4d079cebd26944c1c9e516c10e8dcc2744ad452b0cd56814f74ae1daa/merged/etc/passwd: no such file or directory"
	Nov 19 03:02:11 default-k8s-diff-port-579203 crio[653]: time="2025-11-19T03:02:11.766511656Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/aa7c0ab4d079cebd26944c1c9e516c10e8dcc2744ad452b0cd56814f74ae1daa/merged/etc/group: no such file or directory"
	Nov 19 03:02:11 default-k8s-diff-port-579203 crio[653]: time="2025-11-19T03:02:11.766775698Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 03:02:11 default-k8s-diff-port-579203 crio[653]: time="2025-11-19T03:02:11.792562874Z" level=info msg="Created container dc2899265d6b0abb8bb121734cf5340b1c1e2eaaeacba61cd625f0fd5849b46a: kube-system/storage-provisioner/storage-provisioner" id=db1f30e8-c83d-4621-aa93-c6914ac0d1db name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 03:02:11 default-k8s-diff-port-579203 crio[653]: time="2025-11-19T03:02:11.793788562Z" level=info msg="Starting container: dc2899265d6b0abb8bb121734cf5340b1c1e2eaaeacba61cd625f0fd5849b46a" id=9d6a693d-9217-41ee-b94f-345cb4b36715 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 03:02:11 default-k8s-diff-port-579203 crio[653]: time="2025-11-19T03:02:11.795897736Z" level=info msg="Started container" PID=1638 containerID=dc2899265d6b0abb8bb121734cf5340b1c1e2eaaeacba61cd625f0fd5849b46a description=kube-system/storage-provisioner/storage-provisioner id=9d6a693d-9217-41ee-b94f-345cb4b36715 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e8c3902df90b2da5835f0282101762661e41a1a9efecfed6306176699b6b59b8
	Nov 19 03:02:20 default-k8s-diff-port-579203 crio[653]: time="2025-11-19T03:02:20.975762625Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 03:02:20 default-k8s-diff-port-579203 crio[653]: time="2025-11-19T03:02:20.983586188Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 03:02:20 default-k8s-diff-port-579203 crio[653]: time="2025-11-19T03:02:20.983622568Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 03:02:20 default-k8s-diff-port-579203 crio[653]: time="2025-11-19T03:02:20.983643835Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 03:02:20 default-k8s-diff-port-579203 crio[653]: time="2025-11-19T03:02:20.986734414Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 03:02:20 default-k8s-diff-port-579203 crio[653]: time="2025-11-19T03:02:20.986765059Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 03:02:20 default-k8s-diff-port-579203 crio[653]: time="2025-11-19T03:02:20.986787425Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 03:02:20 default-k8s-diff-port-579203 crio[653]: time="2025-11-19T03:02:20.989780086Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 03:02:20 default-k8s-diff-port-579203 crio[653]: time="2025-11-19T03:02:20.989811281Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 03:02:20 default-k8s-diff-port-579203 crio[653]: time="2025-11-19T03:02:20.989833853Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 03:02:20 default-k8s-diff-port-579203 crio[653]: time="2025-11-19T03:02:20.992733979Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 03:02:20 default-k8s-diff-port-579203 crio[653]: time="2025-11-19T03:02:20.992766101Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 03:02:20 default-k8s-diff-port-579203 crio[653]: time="2025-11-19T03:02:20.992790995Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 03:02:20 default-k8s-diff-port-579203 crio[653]: time="2025-11-19T03:02:20.996511963Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 03:02:20 default-k8s-diff-port-579203 crio[653]: time="2025-11-19T03:02:20.996544898Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	dc2899265d6b0       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           19 seconds ago      Running             storage-provisioner         2                   e8c3902df90b2       storage-provisioner                                    kube-system
	0b0a1ea8af8be       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           22 seconds ago      Exited              dashboard-metrics-scraper   2                   d9d09765bae39       dashboard-metrics-scraper-6ffb444bf9-57qxx             kubernetes-dashboard
	60ef01ce19f59       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   31 seconds ago      Running             kubernetes-dashboard        0                   6072879a2f315       kubernetes-dashboard-855c9754f9-7sz62                  kubernetes-dashboard
	cc12ce0d09f2f       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           50 seconds ago      Running             kindnet-cni                 1                   9aa550c464849       kindnet-bt849                                          kube-system
	36dc12556790e       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           50 seconds ago      Running             coredns                     1                   e713f2d887381       coredns-66bc5c9577-pkngt                               kube-system
	39e0b2fc4572e       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           50 seconds ago      Running             busybox                     1                   6815d86beab2a       busybox                                                default
	3ea7de269e8e6       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           51 seconds ago      Running             kube-proxy                  1                   c8bb45b09c734       kube-proxy-7ncfq                                       kube-system
	717bbd5246f66       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           51 seconds ago      Exited              storage-provisioner         1                   e8c3902df90b2       storage-provisioner                                    kube-system
	4516831cebdb2       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           58 seconds ago      Running             etcd                        1                   d59b243572465       etcd-default-k8s-diff-port-579203                      kube-system
	34a04e8a92683       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           58 seconds ago      Running             kube-apiserver              1                   111d547c1a8a6       kube-apiserver-default-k8s-diff-port-579203            kube-system
	3803cdc1a2993       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           58 seconds ago      Running             kube-controller-manager     1                   2934feb29a5c7       kube-controller-manager-default-k8s-diff-port-579203   kube-system
	1f1f933b71826       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           58 seconds ago      Running             kube-scheduler              1                   acefce472ced4       kube-scheduler-default-k8s-diff-port-579203            kube-system
	
	
	==> coredns [36dc12556790ec62ebafc51adfeddf981db6efc365694b45844fc58332452d44] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46725 - 14385 "HINFO IN 5227044846904803637.3225010015740555538. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.056835043s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-579203
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-579203
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ebd49efe8b7d44f89fc96e21beffcd64dfee8277
	                    minikube.k8s.io/name=default-k8s-diff-port-579203
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T03_00_11_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 03:00:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-579203
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 03:02:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 03:02:10 +0000   Wed, 19 Nov 2025 03:00:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 03:02:10 +0000   Wed, 19 Nov 2025 03:00:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 03:02:10 +0000   Wed, 19 Nov 2025 03:00:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 03:02:10 +0000   Wed, 19 Nov 2025 03:00:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-579203
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                7a64d282-4275-4f3a-a03c-1a14359e0c92
	  Boot ID:                    b92b1939-fcd0-45dc-ac89-2d161566a71c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 coredns-66bc5c9577-pkngt                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m16s
	  kube-system                 etcd-default-k8s-diff-port-579203                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m20s
	  kube-system                 kindnet-bt849                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m16s
	  kube-system                 kube-apiserver-default-k8s-diff-port-579203             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-579203    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 kube-proxy-7ncfq                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m16s
	  kube-system                 kube-scheduler-default-k8s-diff-port-579203             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m14s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-57qxx              0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-7sz62                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m14s                  kube-proxy       
	  Normal   Starting                 48s                    kube-proxy       
	  Normal   NodeHasSufficientMemory  2m32s (x8 over 2m32s)  kubelet          Node default-k8s-diff-port-579203 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m32s (x8 over 2m32s)  kubelet          Node default-k8s-diff-port-579203 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m32s (x8 over 2m32s)  kubelet          Node default-k8s-diff-port-579203 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m21s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m21s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m20s                  kubelet          Node default-k8s-diff-port-579203 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m20s                  kubelet          Node default-k8s-diff-port-579203 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m20s                  kubelet          Node default-k8s-diff-port-579203 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           2m17s                  node-controller  Node default-k8s-diff-port-579203 event: Registered Node default-k8s-diff-port-579203 in Controller
	  Normal   NodeReady                95s                    kubelet          Node default-k8s-diff-port-579203 status is now: NodeReady
	  Normal   Starting                 60s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 60s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  59s (x8 over 59s)      kubelet          Node default-k8s-diff-port-579203 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    59s (x8 over 59s)      kubelet          Node default-k8s-diff-port-579203 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     59s (x8 over 59s)      kubelet          Node default-k8s-diff-port-579203 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           48s                    node-controller  Node default-k8s-diff-port-579203 event: Registered Node default-k8s-diff-port-579203 in Controller
	
	
	==> dmesg <==
	[Nov19 02:38] overlayfs: idmapped layers are currently not supported
	[Nov19 02:39] overlayfs: idmapped layers are currently not supported
	[Nov19 02:41] overlayfs: idmapped layers are currently not supported
	[ +25.528121] overlayfs: idmapped layers are currently not supported
	[ +11.329962] overlayfs: idmapped layers are currently not supported
	[Nov19 02:42] overlayfs: idmapped layers are currently not supported
	[ +16.386117] overlayfs: idmapped layers are currently not supported
	[Nov19 02:43] overlayfs: idmapped layers are currently not supported
	[ +23.762081] overlayfs: idmapped layers are currently not supported
	[Nov19 02:45] overlayfs: idmapped layers are currently not supported
	[Nov19 02:46] overlayfs: idmapped layers are currently not supported
	[Nov19 02:48] overlayfs: idmapped layers are currently not supported
	[Nov19 02:50] overlayfs: idmapped layers are currently not supported
	[ +30.622614] overlayfs: idmapped layers are currently not supported
	[Nov19 02:53] overlayfs: idmapped layers are currently not supported
	[Nov19 02:55] overlayfs: idmapped layers are currently not supported
	[ +48.629499] overlayfs: idmapped layers are currently not supported
	[Nov19 02:56] overlayfs: idmapped layers are currently not supported
	[ +31.470515] overlayfs: idmapped layers are currently not supported
	[Nov19 02:57] overlayfs: idmapped layers are currently not supported
	[Nov19 02:58] overlayfs: idmapped layers are currently not supported
	[Nov19 03:00] overlayfs: idmapped layers are currently not supported
	[  +8.385032] overlayfs: idmapped layers are currently not supported
	[Nov19 03:01] overlayfs: idmapped layers are currently not supported
	[  +9.842210] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [4516831cebdb2595b82a89b6272a4678df2d23122cf9ae52b8b5ae44bd439756] <==
	{"level":"warn","ts":"2025-11-19T03:01:35.476184Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:35.484382Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:35.534526Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:35.546960Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:35.579169Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:35.606455Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:35.639518Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:35.656908Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:35.668989Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:35.726169Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:35.785347Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:35.820170Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:35.870606Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:35.915578Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:35.949603Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:35.975691Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:36.024343Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32988","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:36.046104Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:36.142893Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:36.153691Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:36.249788Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:36.286192Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:36.323909Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:36.352050Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:36.529800Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33124","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 03:02:31 up 10:44,  0 user,  load average: 3.76, 3.42, 2.75
	Linux default-k8s-diff-port-579203 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [cc12ce0d09f2f1fda420bf8fe3582af2e4d897fbce86ad179d3548f3c7dd46f7] <==
	I1119 03:01:40.792340       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 03:01:40.796228       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1119 03:01:40.805891       1 main.go:148] setting mtu 1500 for CNI 
	I1119 03:01:40.805911       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 03:01:40.805923       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T03:01:40Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 03:01:40.972388       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 03:01:40.972406       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 03:01:40.972414       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 03:01:40.972685       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1119 03:02:10.973162       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1119 03:02:10.973274       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1119 03:02:10.973315       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1119 03:02:10.973362       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1119 03:02:12.472861       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 03:02:12.472932       1 metrics.go:72] Registering metrics
	I1119 03:02:12.473039       1 controller.go:711] "Syncing nftables rules"
	I1119 03:02:20.975397       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1119 03:02:20.975474       1 main.go:301] handling current node
	I1119 03:02:30.979949       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1119 03:02:30.979982       1 main.go:301] handling current node
	
	
	==> kube-apiserver [34a04e8a9268354f0b56354ac57651328f516cc508f9fa0c077c3b4d4336b5ac] <==
	I1119 03:01:39.200622       1 autoregister_controller.go:144] Starting autoregister controller
	I1119 03:01:39.200629       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1119 03:01:39.210295       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1119 03:01:39.210342       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1119 03:01:39.210869       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1119 03:01:39.210913       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1119 03:01:39.249053       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1119 03:01:39.255934       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1119 03:01:39.286529       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1119 03:01:39.273060       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 03:01:39.306017       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1119 03:01:39.373382       1 cache.go:39] Caches are synced for autoregister controller
	I1119 03:01:39.415441       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1119 03:01:39.416053       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1119 03:01:39.484965       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	E1119 03:01:39.544584       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1119 03:01:41.013235       1 controller.go:667] quota admission added evaluator for: namespaces
	I1119 03:01:41.245627       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1119 03:01:41.395640       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 03:01:41.444487       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 03:01:41.614555       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.72.193"}
	I1119 03:01:41.653126       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.132.57"}
	I1119 03:01:44.064236       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1119 03:01:44.402333       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1119 03:01:44.499003       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [3803cdc1a2993683debdee19a3b01fb09e7c32a9d12eb84a9436d969662cea8a] <==
	I1119 03:01:43.998233       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1119 03:01:43.998263       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1119 03:01:44.000493       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1119 03:01:44.001716       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1119 03:01:44.003416       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1119 03:01:44.003542       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 03:01:44.009619       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1119 03:01:44.009727       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1119 03:01:44.013756       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1119 03:01:44.014815       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1119 03:01:44.019901       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1119 03:01:44.021750       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1119 03:01:44.021907       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1119 03:01:44.021966       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1119 03:01:44.022036       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1119 03:01:44.028322       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1119 03:01:44.030671       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1119 03:01:44.049583       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1119 03:01:44.053562       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1119 03:01:44.057463       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1119 03:01:44.065914       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1119 03:01:44.066062       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1119 03:01:44.081612       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 03:01:44.410306       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kubernetes-dashboard/dashboard-metrics-scraper" err="EndpointSlice informer cache is out of date"
	I1119 03:01:44.412642       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kubernetes-dashboard/kubernetes-dashboard" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [3ea7de269e8e6d7b9b64192a351808f1a03a33517868461ef84dc108d46883a5] <==
	I1119 03:01:41.800409       1 server_linux.go:53] "Using iptables proxy"
	I1119 03:01:41.979556       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 03:01:42.093319       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 03:01:42.093372       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1119 03:01:42.093473       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 03:01:42.561909       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 03:01:42.562036       1 server_linux.go:132] "Using iptables Proxier"
	I1119 03:01:42.692526       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 03:01:42.692942       1 server.go:527] "Version info" version="v1.34.1"
	I1119 03:01:42.693210       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 03:01:42.706321       1 config.go:200] "Starting service config controller"
	I1119 03:01:42.706389       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 03:01:42.706441       1 config.go:106] "Starting endpoint slice config controller"
	I1119 03:01:42.706467       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 03:01:42.731566       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 03:01:42.732447       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 03:01:42.733244       1 config.go:309] "Starting node config controller"
	I1119 03:01:42.739690       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 03:01:42.739762       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 03:01:42.807183       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1119 03:01:42.833452       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1119 03:01:42.844245       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [1f1f933b7182604f83325a95fc3ff39e0799211227f9d528ab807a128acc0a96] <==
	I1119 03:01:40.724127       1 serving.go:386] Generated self-signed cert in-memory
	I1119 03:01:42.521071       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1119 03:01:42.521162       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 03:01:42.529412       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1119 03:01:42.531887       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 03:01:42.538129       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 03:01:42.531844       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1119 03:01:42.538158       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1119 03:01:42.531902       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1119 03:01:42.538936       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1119 03:01:42.531918       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1119 03:01:42.641305       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 03:01:42.646593       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1119 03:01:42.651200       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Nov 19 03:01:44 default-k8s-diff-port-579203 kubelet[783]: E1119 03:01:44.448819     783 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/aea40428-2b0c-4c57-8708-f8b56e473799-kube-api-access-hs999 podName:aea40428-2b0c-4c57-8708-f8b56e473799 nodeName:}" failed. No retries permitted until 2025-11-19 03:01:44.948793285 +0000 UTC m=+13.087466182 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-hs999" (UniqueName: "kubernetes.io/projected/aea40428-2b0c-4c57-8708-f8b56e473799-kube-api-access-hs999") pod "dashboard-metrics-scraper-6ffb444bf9-57qxx" (UID: "aea40428-2b0c-4c57-8708-f8b56e473799") : configmap "kube-root-ca.crt" not found
	Nov 19 03:01:44 default-k8s-diff-port-579203 kubelet[783]: E1119 03:01:44.452415     783 projected.go:291] Couldn't get configMap kubernetes-dashboard/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 19 03:01:44 default-k8s-diff-port-579203 kubelet[783]: E1119 03:01:44.452455     783 projected.go:196] Error preparing data for projected volume kube-api-access-22bsf for pod kubernetes-dashboard/kubernetes-dashboard-855c9754f9-7sz62: configmap "kube-root-ca.crt" not found
	Nov 19 03:01:44 default-k8s-diff-port-579203 kubelet[783]: E1119 03:01:44.452514     783 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2e8cb514-f8db-4efe-8e51-f6de4fd4b53f-kube-api-access-22bsf podName:2e8cb514-f8db-4efe-8e51-f6de4fd4b53f nodeName:}" failed. No retries permitted until 2025-11-19 03:01:44.952496728 +0000 UTC m=+13.091169633 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-22bsf" (UniqueName: "kubernetes.io/projected/2e8cb514-f8db-4efe-8e51-f6de4fd4b53f-kube-api-access-22bsf") pod "kubernetes-dashboard-855c9754f9-7sz62" (UID: "2e8cb514-f8db-4efe-8e51-f6de4fd4b53f") : configmap "kube-root-ca.crt" not found
	Nov 19 03:01:45 default-k8s-diff-port-579203 kubelet[783]: W1119 03:01:45.225677     783 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/d6ecbc325578cb5699745b1df1421f02473ccdb3a25858c2b3fc6ef55c70faf5/crio-d9d09765bae3989b896848db8ac82025fb98160d71a54dfca747a533e9a092da WatchSource:0}: Error finding container d9d09765bae3989b896848db8ac82025fb98160d71a54dfca747a533e9a092da: Status 404 returned error can't find the container with id d9d09765bae3989b896848db8ac82025fb98160d71a54dfca747a533e9a092da
	Nov 19 03:01:45 default-k8s-diff-port-579203 kubelet[783]: W1119 03:01:45.276219     783 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/d6ecbc325578cb5699745b1df1421f02473ccdb3a25858c2b3fc6ef55c70faf5/crio-6072879a2f315e3c62a808fd67e562ca0ab823a5bccf603f1f3aa7466f3ecc54 WatchSource:0}: Error finding container 6072879a2f315e3c62a808fd67e562ca0ab823a5bccf603f1f3aa7466f3ecc54: Status 404 returned error can't find the container with id 6072879a2f315e3c62a808fd67e562ca0ab823a5bccf603f1f3aa7466f3ecc54
	Nov 19 03:01:52 default-k8s-diff-port-579203 kubelet[783]: I1119 03:01:52.688502     783 scope.go:117] "RemoveContainer" containerID="1bff5cfaecfb7dcf74d354b0ea4c8d3fed138fe0b53c189b453157ac5a6c737a"
	Nov 19 03:01:53 default-k8s-diff-port-579203 kubelet[783]: I1119 03:01:53.693880     783 scope.go:117] "RemoveContainer" containerID="1bff5cfaecfb7dcf74d354b0ea4c8d3fed138fe0b53c189b453157ac5a6c737a"
	Nov 19 03:01:53 default-k8s-diff-port-579203 kubelet[783]: I1119 03:01:53.694169     783 scope.go:117] "RemoveContainer" containerID="91fbb59ec8bd1d126fdd3e4692f1d3c63209341307ac59801f125d3fc0243304"
	Nov 19 03:01:53 default-k8s-diff-port-579203 kubelet[783]: E1119 03:01:53.694323     783 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-57qxx_kubernetes-dashboard(aea40428-2b0c-4c57-8708-f8b56e473799)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-57qxx" podUID="aea40428-2b0c-4c57-8708-f8b56e473799"
	Nov 19 03:01:54 default-k8s-diff-port-579203 kubelet[783]: I1119 03:01:54.698591     783 scope.go:117] "RemoveContainer" containerID="91fbb59ec8bd1d126fdd3e4692f1d3c63209341307ac59801f125d3fc0243304"
	Nov 19 03:01:54 default-k8s-diff-port-579203 kubelet[783]: E1119 03:01:54.698754     783 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-57qxx_kubernetes-dashboard(aea40428-2b0c-4c57-8708-f8b56e473799)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-57qxx" podUID="aea40428-2b0c-4c57-8708-f8b56e473799"
	Nov 19 03:01:55 default-k8s-diff-port-579203 kubelet[783]: I1119 03:01:55.701200     783 scope.go:117] "RemoveContainer" containerID="91fbb59ec8bd1d126fdd3e4692f1d3c63209341307ac59801f125d3fc0243304"
	Nov 19 03:01:55 default-k8s-diff-port-579203 kubelet[783]: E1119 03:01:55.701357     783 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-57qxx_kubernetes-dashboard(aea40428-2b0c-4c57-8708-f8b56e473799)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-57qxx" podUID="aea40428-2b0c-4c57-8708-f8b56e473799"
	Nov 19 03:02:08 default-k8s-diff-port-579203 kubelet[783]: I1119 03:02:08.147203     783 scope.go:117] "RemoveContainer" containerID="91fbb59ec8bd1d126fdd3e4692f1d3c63209341307ac59801f125d3fc0243304"
	Nov 19 03:02:08 default-k8s-diff-port-579203 kubelet[783]: I1119 03:02:08.747198     783 scope.go:117] "RemoveContainer" containerID="91fbb59ec8bd1d126fdd3e4692f1d3c63209341307ac59801f125d3fc0243304"
	Nov 19 03:02:08 default-k8s-diff-port-579203 kubelet[783]: I1119 03:02:08.747526     783 scope.go:117] "RemoveContainer" containerID="0b0a1ea8af8bea488a5668e7548a35e9a9b3f133dd62d0b5eec95839647aa3a9"
	Nov 19 03:02:08 default-k8s-diff-port-579203 kubelet[783]: E1119 03:02:08.747778     783 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-57qxx_kubernetes-dashboard(aea40428-2b0c-4c57-8708-f8b56e473799)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-57qxx" podUID="aea40428-2b0c-4c57-8708-f8b56e473799"
	Nov 19 03:02:08 default-k8s-diff-port-579203 kubelet[783]: I1119 03:02:08.769784     783 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-7sz62" podStartSLOduration=10.080701345 podStartE2EDuration="24.768365457s" podCreationTimestamp="2025-11-19 03:01:44 +0000 UTC" firstStartedPulling="2025-11-19 03:01:45.284943555 +0000 UTC m=+13.423616452" lastFinishedPulling="2025-11-19 03:01:59.972607667 +0000 UTC m=+28.111280564" observedRunningTime="2025-11-19 03:02:00.745301926 +0000 UTC m=+28.883974831" watchObservedRunningTime="2025-11-19 03:02:08.768365457 +0000 UTC m=+36.907038363"
	Nov 19 03:02:11 default-k8s-diff-port-579203 kubelet[783]: I1119 03:02:11.757603     783 scope.go:117] "RemoveContainer" containerID="717bbd5246f66b2cc923d8f5ba5038836144f1b64ec4fff2f37f5caf1afef446"
	Nov 19 03:02:15 default-k8s-diff-port-579203 kubelet[783]: I1119 03:02:15.143103     783 scope.go:117] "RemoveContainer" containerID="0b0a1ea8af8bea488a5668e7548a35e9a9b3f133dd62d0b5eec95839647aa3a9"
	Nov 19 03:02:15 default-k8s-diff-port-579203 kubelet[783]: E1119 03:02:15.143310     783 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-57qxx_kubernetes-dashboard(aea40428-2b0c-4c57-8708-f8b56e473799)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-57qxx" podUID="aea40428-2b0c-4c57-8708-f8b56e473799"
	Nov 19 03:02:26 default-k8s-diff-port-579203 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 19 03:02:26 default-k8s-diff-port-579203 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 19 03:02:26 default-k8s-diff-port-579203 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [60ef01ce19f59b14a39c9d03bdda2fb6b702a2cd8a2bdca3dce9e879e6a33576] <==
	2025/11/19 03:02:00 Starting overwatch
	2025/11/19 03:02:00 Using namespace: kubernetes-dashboard
	2025/11/19 03:02:00 Using in-cluster config to connect to apiserver
	2025/11/19 03:02:00 Using secret token for csrf signing
	2025/11/19 03:02:00 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/19 03:02:00 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/19 03:02:00 Successful initial request to the apiserver, version: v1.34.1
	2025/11/19 03:02:00 Generating JWE encryption key
	2025/11/19 03:02:00 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/19 03:02:00 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/19 03:02:00 Initializing JWE encryption key from synchronized object
	2025/11/19 03:02:00 Creating in-cluster Sidecar client
	2025/11/19 03:02:00 Serving insecurely on HTTP port: 9090
	2025/11/19 03:02:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/19 03:02:31 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [717bbd5246f66b2cc923d8f5ba5038836144f1b64ec4fff2f37f5caf1afef446] <==
	I1119 03:01:40.905705       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1119 03:02:10.916520       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [dc2899265d6b0abb8bb121734cf5340b1c1e2eaaeacba61cd625f0fd5849b46a] <==
	I1119 03:02:11.811246       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1119 03:02:11.830001       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1119 03:02:11.830146       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1119 03:02:11.833226       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:02:15.288162       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:02:19.548434       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:02:23.147828       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:02:26.201324       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:02:29.223053       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:02:29.229633       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 03:02:29.229791       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1119 03:02:29.230478       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9af3f9a5-889b-4042-b73a-79c73b0a4e8f", APIVersion:"v1", ResourceVersion:"683", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-579203_a6ff1c61-7f89-4f75-a41a-30366bf4b2e0 became leader
	I1119 03:02:29.230507       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-579203_a6ff1c61-7f89-4f75-a41a-30366bf4b2e0!
	W1119 03:02:29.234967       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:02:29.247717       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 03:02:29.331021       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-579203_a6ff1c61-7f89-4f75-a41a-30366bf4b2e0!
	W1119 03:02:31.251254       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:02:31.256576       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-579203 -n default-k8s-diff-port-579203
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-579203 -n default-k8s-diff-port-579203: exit status 2 (380.822015ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-579203 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (6.24s)
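The Pause failures in this group follow the same pattern visible in the logs: minikube disables the kubelet on the node, lists CRI containers in the kube-system, kubernetes-dashboard and istio-operator namespaces, and then asks runc for the running containers; in the embed-certs log below that last step fails with "open /run/runc: no such file or directory". The following is a minimal, illustrative sketch of that sequence only, assuming crictl and runc are on PATH; the function names are hypothetical and are not minikube's implementation.

	// pause_check_sketch.go: reproduce the container-listing steps seen in the
	// pause post-mortems (crictl ps per namespace, then runc list -f json).
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listCRIContainers mirrors the logged command:
	//   crictl ps -a --quiet --label io.kubernetes.pod.namespace=<ns>
	// and collects the returned container IDs for each namespace.
	func listCRIContainers(namespaces []string) ([]string, error) {
		var ids []string
		for _, ns := range namespaces {
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
				"--label", "io.kubernetes.pod.namespace="+ns).Output()
			if err != nil {
				return nil, fmt.Errorf("crictl ps (%s): %w", ns, err)
			}
			ids = append(ids, strings.Fields(string(out))...)
		}
		return ids, nil
	}

	func main() {
		ids, err := listCRIContainers([]string{"kube-system", "kubernetes-dashboard", "istio-operator"})
		if err != nil {
			fmt.Println("listing CRI containers failed:", err)
			return
		}
		fmt.Printf("found %d candidate containers to pause\n", len(ids))

		// This is the step that fails in the embed-certs log: runc cannot open
		// its default state directory /run/runc on this node.
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err != nil {
			fmt.Printf("runc list failed: %v\n%s", err, out)
			return
		}
		fmt.Println(string(out))
	}

Run on the affected node, the sketch should reproduce the same "open /run/runc: no such file or directory" error that causes the retry loop in the pause output below.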

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (7.66s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-592123 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p embed-certs-592123 --alsologtostderr -v=1: exit status 80 (2.107280794s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-592123 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1119 03:02:38.477445 1663468 out.go:360] Setting OutFile to fd 1 ...
	I1119 03:02:38.477585 1663468 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 03:02:38.477597 1663468 out.go:374] Setting ErrFile to fd 2...
	I1119 03:02:38.477602 1663468 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 03:02:38.477868 1663468 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-1463525/.minikube/bin
	I1119 03:02:38.478140 1663468 out.go:368] Setting JSON to false
	I1119 03:02:38.478167 1663468 mustload.go:66] Loading cluster: embed-certs-592123
	I1119 03:02:38.478556 1663468 config.go:182] Loaded profile config "embed-certs-592123": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 03:02:38.479021 1663468 cli_runner.go:164] Run: docker container inspect embed-certs-592123 --format={{.State.Status}}
	I1119 03:02:38.510255 1663468 host.go:66] Checking if "embed-certs-592123" exists ...
	I1119 03:02:38.510577 1663468 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 03:02:38.642354 1663468 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:59 OomKillDisable:true NGoroutines:71 SystemTime:2025-11-19 03:02:38.626151366 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 03:02:38.642996 1663468 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-592123 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1119 03:02:38.648664 1663468 out.go:179] * Pausing node embed-certs-592123 ... 
	I1119 03:02:38.654470 1663468 host.go:66] Checking if "embed-certs-592123" exists ...
	I1119 03:02:38.654982 1663468 ssh_runner.go:195] Run: systemctl --version
	I1119 03:02:38.655034 1663468 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-592123
	I1119 03:02:38.685371 1663468 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34920 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/embed-certs-592123/id_rsa Username:docker}
	I1119 03:02:38.801408 1663468 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 03:02:38.836308 1663468 pause.go:52] kubelet running: true
	I1119 03:02:38.836375 1663468 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1119 03:02:39.196841 1663468 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1119 03:02:39.196933 1663468 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1119 03:02:39.283121 1663468 cri.go:89] found id: "d75fb98c294954a086d7ac0cd21c45155c83c760cc142791e7d4eca8043ba541"
	I1119 03:02:39.283147 1663468 cri.go:89] found id: "00cd4bbd35ff9b6d4771e35883c59aeba78d227be099a95a6bb86d479cf45616"
	I1119 03:02:39.283151 1663468 cri.go:89] found id: "693b87a40338d1ed91a893430753efcc324f88bb8889e2774deb45e612737e46"
	I1119 03:02:39.283155 1663468 cri.go:89] found id: "de8510aae91ebe4f52e6549cedefec1262166611b28429518e2b4db5fffb05e5"
	I1119 03:02:39.283158 1663468 cri.go:89] found id: "c25eeb92b97431c67651c962c7e0e3bc4fdf21f9c78ccc749a241b682fb1ee70"
	I1119 03:02:39.283162 1663468 cri.go:89] found id: "28baf9cda670ab54ffce2ff7181d4841299d3d55c51eab8df2a52c1c366a4111"
	I1119 03:02:39.283165 1663468 cri.go:89] found id: "44051fa115dbdefd2547da0097f35a9d487cbcc9b4becc2a70f91a77a0d1da21"
	I1119 03:02:39.283179 1663468 cri.go:89] found id: "0c30389a4661b622b8e4e66ed3373832cf9f4abe199dc1ec782692aa5b76a699"
	I1119 03:02:39.283183 1663468 cri.go:89] found id: "50a2bdb9c67513a1526c7008d09101b3db95d7bac468c5e2f2f7dcda041de7b5"
	I1119 03:02:39.283189 1663468 cri.go:89] found id: "9ec1e7c5c349fe1b082432751f82acffb163fa8de01a3b8cbd8e6a9956820502"
	I1119 03:02:39.283196 1663468 cri.go:89] found id: "b107b252e23438e4e91346429aa292ef99768ea80e02ebc445b2a52c2f401d41"
	I1119 03:02:39.283199 1663468 cri.go:89] found id: ""
	I1119 03:02:39.283249 1663468 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 03:02:39.297417 1663468 retry.go:31] will retry after 313.8673ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T03:02:39Z" level=error msg="open /run/runc: no such file or directory"
	I1119 03:02:39.611980 1663468 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 03:02:39.624994 1663468 pause.go:52] kubelet running: false
	I1119 03:02:39.625062 1663468 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1119 03:02:39.796061 1663468 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1119 03:02:39.796149 1663468 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1119 03:02:39.871161 1663468 cri.go:89] found id: "d75fb98c294954a086d7ac0cd21c45155c83c760cc142791e7d4eca8043ba541"
	I1119 03:02:39.871184 1663468 cri.go:89] found id: "00cd4bbd35ff9b6d4771e35883c59aeba78d227be099a95a6bb86d479cf45616"
	I1119 03:02:39.871189 1663468 cri.go:89] found id: "693b87a40338d1ed91a893430753efcc324f88bb8889e2774deb45e612737e46"
	I1119 03:02:39.871193 1663468 cri.go:89] found id: "de8510aae91ebe4f52e6549cedefec1262166611b28429518e2b4db5fffb05e5"
	I1119 03:02:39.871196 1663468 cri.go:89] found id: "c25eeb92b97431c67651c962c7e0e3bc4fdf21f9c78ccc749a241b682fb1ee70"
	I1119 03:02:39.871200 1663468 cri.go:89] found id: "28baf9cda670ab54ffce2ff7181d4841299d3d55c51eab8df2a52c1c366a4111"
	I1119 03:02:39.871204 1663468 cri.go:89] found id: "44051fa115dbdefd2547da0097f35a9d487cbcc9b4becc2a70f91a77a0d1da21"
	I1119 03:02:39.871207 1663468 cri.go:89] found id: "0c30389a4661b622b8e4e66ed3373832cf9f4abe199dc1ec782692aa5b76a699"
	I1119 03:02:39.871210 1663468 cri.go:89] found id: "50a2bdb9c67513a1526c7008d09101b3db95d7bac468c5e2f2f7dcda041de7b5"
	I1119 03:02:39.871219 1663468 cri.go:89] found id: "9ec1e7c5c349fe1b082432751f82acffb163fa8de01a3b8cbd8e6a9956820502"
	I1119 03:02:39.871223 1663468 cri.go:89] found id: "b107b252e23438e4e91346429aa292ef99768ea80e02ebc445b2a52c2f401d41"
	I1119 03:02:39.871227 1663468 cri.go:89] found id: ""
	I1119 03:02:39.871283 1663468 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 03:02:39.882046 1663468 retry.go:31] will retry after 367.791064ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T03:02:39Z" level=error msg="open /run/runc: no such file or directory"
	I1119 03:02:40.250314 1663468 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 03:02:40.263996 1663468 pause.go:52] kubelet running: false
	I1119 03:02:40.264071 1663468 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1119 03:02:40.431691 1663468 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1119 03:02:40.431778 1663468 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1119 03:02:40.499679 1663468 cri.go:89] found id: "d75fb98c294954a086d7ac0cd21c45155c83c760cc142791e7d4eca8043ba541"
	I1119 03:02:40.499702 1663468 cri.go:89] found id: "00cd4bbd35ff9b6d4771e35883c59aeba78d227be099a95a6bb86d479cf45616"
	I1119 03:02:40.499709 1663468 cri.go:89] found id: "693b87a40338d1ed91a893430753efcc324f88bb8889e2774deb45e612737e46"
	I1119 03:02:40.499712 1663468 cri.go:89] found id: "de8510aae91ebe4f52e6549cedefec1262166611b28429518e2b4db5fffb05e5"
	I1119 03:02:40.499716 1663468 cri.go:89] found id: "c25eeb92b97431c67651c962c7e0e3bc4fdf21f9c78ccc749a241b682fb1ee70"
	I1119 03:02:40.499723 1663468 cri.go:89] found id: "28baf9cda670ab54ffce2ff7181d4841299d3d55c51eab8df2a52c1c366a4111"
	I1119 03:02:40.499726 1663468 cri.go:89] found id: "44051fa115dbdefd2547da0097f35a9d487cbcc9b4becc2a70f91a77a0d1da21"
	I1119 03:02:40.499729 1663468 cri.go:89] found id: "0c30389a4661b622b8e4e66ed3373832cf9f4abe199dc1ec782692aa5b76a699"
	I1119 03:02:40.499733 1663468 cri.go:89] found id: "50a2bdb9c67513a1526c7008d09101b3db95d7bac468c5e2f2f7dcda041de7b5"
	I1119 03:02:40.499741 1663468 cri.go:89] found id: "9ec1e7c5c349fe1b082432751f82acffb163fa8de01a3b8cbd8e6a9956820502"
	I1119 03:02:40.499744 1663468 cri.go:89] found id: "b107b252e23438e4e91346429aa292ef99768ea80e02ebc445b2a52c2f401d41"
	I1119 03:02:40.499747 1663468 cri.go:89] found id: ""
	I1119 03:02:40.499808 1663468 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 03:02:40.515291 1663468 out.go:203] 
	W1119 03:02:40.518223 1663468 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T03:02:40Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T03:02:40Z" level=error msg="open /run/runc: no such file or directory"
	
	W1119 03:02:40.518247 1663468 out.go:285] * 
	* 
	W1119 03:02:40.527720 1663468 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 03:02:40.530680 1663468 out.go:203] 

                                                
                                                
** /stderr **
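The pause failure above reduces to `sudo runc list -f json` exiting non-zero because `/run/runc` does not exist on the node, even though `crictl` still reports running containers in the kube-system namespace. A minimal Go sketch of the same probe sequence, assuming it runs directly on the node (the trace does it over SSH) and that `runc` and `crictl` are on PATH; the retry delays are illustrative, not the ones minikube computes.

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// listRunc mirrors the probe from the trace: `sudo runc list -f json`.
func listRunc() ([]byte, error) {
	return exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
}

// listKubeSystem mirrors the crictl call from the trace, limited to one namespace.
func listKubeSystem() ([]byte, error) {
	return exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").CombinedOutput()
}

func main() {
	ids, err := listKubeSystem()
	if err != nil {
		fmt.Printf("crictl failed: %v\n%s", err, ids)
		return
	}
	fmt.Printf("crictl sees containers:\n%s", ids)

	// Retry runc a few times, roughly like the retry.go lines in the log.
	for attempt := 1; attempt <= 3; attempt++ {
		out, err := listRunc()
		if err == nil {
			fmt.Printf("runc list ok:\n%s", out)
			return
		}
		fmt.Printf("attempt %d: runc list failed: %v\n%s", attempt, err, out)
		time.Sleep(300 * time.Millisecond) // illustrative backoff
	}
	fmt.Println("giving up: /run/runc is likely missing on this node")
}
```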
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p embed-certs-592123 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-592123
helpers_test.go:243: (dbg) docker inspect embed-certs-592123:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "dac66acc5df417ae4fe7f3148566f999b25ed8eb465f085a14cb838106ad0a5e",
	        "Created": "2025-11-19T02:59:47.671670147Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1658247,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T03:01:32.115303141Z",
	            "FinishedAt": "2025-11-19T03:01:31.081912518Z"
	        },
	        "Image": "sha256:6dfeb5329cc83d126555f88f43b09eef7a09c7f546c9166b94d33747df91b6df",
	        "ResolvConfPath": "/var/lib/docker/containers/dac66acc5df417ae4fe7f3148566f999b25ed8eb465f085a14cb838106ad0a5e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dac66acc5df417ae4fe7f3148566f999b25ed8eb465f085a14cb838106ad0a5e/hostname",
	        "HostsPath": "/var/lib/docker/containers/dac66acc5df417ae4fe7f3148566f999b25ed8eb465f085a14cb838106ad0a5e/hosts",
	        "LogPath": "/var/lib/docker/containers/dac66acc5df417ae4fe7f3148566f999b25ed8eb465f085a14cb838106ad0a5e/dac66acc5df417ae4fe7f3148566f999b25ed8eb465f085a14cb838106ad0a5e-json.log",
	        "Name": "/embed-certs-592123",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "embed-certs-592123:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-592123",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "dac66acc5df417ae4fe7f3148566f999b25ed8eb465f085a14cb838106ad0a5e",
	                "LowerDir": "/var/lib/docker/overlay2/0339914a5a3675144df08f1c4c574bd9322eef4783e3f9e23b63823595a97dd7-init/diff:/var/lib/docker/overlay2/c48d08e2bd245db4e1c5c6447aff9f72126e9377265a1f1172daf5070a059e2a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0339914a5a3675144df08f1c4c574bd9322eef4783e3f9e23b63823595a97dd7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0339914a5a3675144df08f1c4c574bd9322eef4783e3f9e23b63823595a97dd7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0339914a5a3675144df08f1c4c574bd9322eef4783e3f9e23b63823595a97dd7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-592123",
	                "Source": "/var/lib/docker/volumes/embed-certs-592123/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-592123",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-592123",
	                "name.minikube.sigs.k8s.io": "embed-certs-592123",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "65df5bc049f755bd22bae72e993ca50c060f2b3c27dc19f2f54eaca5562ea46f",
	            "SandboxKey": "/var/run/docker/netns/65df5bc049f7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34920"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34921"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34924"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34922"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34923"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-592123": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f2:2e:5c:18:03:1d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b71e8f31cf38cfb3f1f6842ca4b0d69a179bc8211fb70e2032bcc5a594b1fbd8",
	                    "EndpointID": "be6c90ae69d1c3b7d840b0ca4621cdd4917d32ff230486330f11db4370700f6f",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-592123",
	                        "dac66acc5df4"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
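Earlier in the trace (the cli_runner/sshutil lines), the SSH port 34920 was resolved from the same container metadata shown above, using a Go template over `.NetworkSettings.Ports`. A minimal sketch of that lookup, assuming the Docker CLI is on PATH and reusing the exact template and container name from the log.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same template the trace uses to find the host port mapped to 22/tcp.
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl,
		"embed-certs-592123").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	port := strings.TrimSpace(string(out))
	fmt.Println("ssh is published on 127.0.0.1:" + port) // 34920 in the inspect output above
}
```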
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-592123 -n embed-certs-592123
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-592123 -n embed-certs-592123: exit status 2 (358.00597ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
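The post-mortem host probe is just `minikube status` with a Go template selecting the `.Host` field; the helper tolerates the non-zero exit ("may be ok") because the host can report Running while other components are down. A minimal sketch of the same probe, assuming the freshly built binary path used throughout this run.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same probe as the post-mortem: only the Host field of `minikube status`.
	cmd := exec.Command("out/minikube-linux-arm64", "status",
		"--format={{.Host}}", "-p", "embed-certs-592123", "-n", "embed-certs-592123")
	out, err := cmd.CombinedOutput()
	fmt.Printf("host state: %s\n", strings.TrimSpace(string(out)))
	if err != nil {
		// A non-zero exit (status 2 above) is reported but not treated as fatal here.
		fmt.Println("status exited non-zero:", err)
	}
}
```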
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-592123 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-592123 logs -n 25: (1.675974842s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p old-k8s-version-525469 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-525469       │ jenkins │ v1.37.0 │ 19 Nov 25 02:58 UTC │ 19 Nov 25 02:59 UTC │
	│ image   │ old-k8s-version-525469 image list --format=json                                                                                                                                                                                               │ old-k8s-version-525469       │ jenkins │ v1.37.0 │ 19 Nov 25 02:59 UTC │ 19 Nov 25 02:59 UTC │
	│ pause   │ -p old-k8s-version-525469 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-525469       │ jenkins │ v1.37.0 │ 19 Nov 25 02:59 UTC │                     │
	│ start   │ -p cert-expiration-422184 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-422184       │ jenkins │ v1.37.0 │ 19 Nov 25 02:59 UTC │ 19 Nov 25 02:59 UTC │
	│ delete  │ -p old-k8s-version-525469                                                                                                                                                                                                                     │ old-k8s-version-525469       │ jenkins │ v1.37.0 │ 19 Nov 25 02:59 UTC │ 19 Nov 25 02:59 UTC │
	│ delete  │ -p old-k8s-version-525469                                                                                                                                                                                                                     │ old-k8s-version-525469       │ jenkins │ v1.37.0 │ 19 Nov 25 02:59 UTC │ 19 Nov 25 02:59 UTC │
	│ start   │ -p default-k8s-diff-port-579203 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-579203 │ jenkins │ v1.37.0 │ 19 Nov 25 02:59 UTC │ 19 Nov 25 03:01 UTC │
	│ delete  │ -p cert-expiration-422184                                                                                                                                                                                                                     │ cert-expiration-422184       │ jenkins │ v1.37.0 │ 19 Nov 25 02:59 UTC │ 19 Nov 25 02:59 UTC │
	│ start   │ -p embed-certs-592123 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-592123           │ jenkins │ v1.37.0 │ 19 Nov 25 02:59 UTC │ 19 Nov 25 03:01 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-579203 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-579203 │ jenkins │ v1.37.0 │ 19 Nov 25 03:01 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-579203 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-579203 │ jenkins │ v1.37.0 │ 19 Nov 25 03:01 UTC │ 19 Nov 25 03:01 UTC │
	│ addons  │ enable metrics-server -p embed-certs-592123 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-592123           │ jenkins │ v1.37.0 │ 19 Nov 25 03:01 UTC │                     │
	│ stop    │ -p embed-certs-592123 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-592123           │ jenkins │ v1.37.0 │ 19 Nov 25 03:01 UTC │ 19 Nov 25 03:01 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-579203 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-579203 │ jenkins │ v1.37.0 │ 19 Nov 25 03:01 UTC │ 19 Nov 25 03:01 UTC │
	│ start   │ -p default-k8s-diff-port-579203 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-579203 │ jenkins │ v1.37.0 │ 19 Nov 25 03:01 UTC │ 19 Nov 25 03:02 UTC │
	│ addons  │ enable dashboard -p embed-certs-592123 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-592123           │ jenkins │ v1.37.0 │ 19 Nov 25 03:01 UTC │ 19 Nov 25 03:01 UTC │
	│ start   │ -p embed-certs-592123 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-592123           │ jenkins │ v1.37.0 │ 19 Nov 25 03:01 UTC │ 19 Nov 25 03:02 UTC │
	│ image   │ default-k8s-diff-port-579203 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-579203 │ jenkins │ v1.37.0 │ 19 Nov 25 03:02 UTC │ 19 Nov 25 03:02 UTC │
	│ pause   │ -p default-k8s-diff-port-579203 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-579203 │ jenkins │ v1.37.0 │ 19 Nov 25 03:02 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-579203                                                                                                                                                                                                               │ default-k8s-diff-port-579203 │ jenkins │ v1.37.0 │ 19 Nov 25 03:02 UTC │ 19 Nov 25 03:02 UTC │
	│ delete  │ -p default-k8s-diff-port-579203                                                                                                                                                                                                               │ default-k8s-diff-port-579203 │ jenkins │ v1.37.0 │ 19 Nov 25 03:02 UTC │ 19 Nov 25 03:02 UTC │
	│ delete  │ -p disable-driver-mounts-722439                                                                                                                                                                                                               │ disable-driver-mounts-722439 │ jenkins │ v1.37.0 │ 19 Nov 25 03:02 UTC │ 19 Nov 25 03:02 UTC │
	│ start   │ -p no-preload-800908 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-800908            │ jenkins │ v1.37.0 │ 19 Nov 25 03:02 UTC │                     │
	│ image   │ embed-certs-592123 image list --format=json                                                                                                                                                                                                   │ embed-certs-592123           │ jenkins │ v1.37.0 │ 19 Nov 25 03:02 UTC │ 19 Nov 25 03:02 UTC │
	│ pause   │ -p embed-certs-592123 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-592123           │ jenkins │ v1.37.0 │ 19 Nov 25 03:02 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 03:02:35
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 03:02:35.455853 1662687 out.go:360] Setting OutFile to fd 1 ...
	I1119 03:02:35.456010 1662687 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 03:02:35.456044 1662687 out.go:374] Setting ErrFile to fd 2...
	I1119 03:02:35.456056 1662687 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 03:02:35.456332 1662687 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-1463525/.minikube/bin
	I1119 03:02:35.456780 1662687 out.go:368] Setting JSON to false
	I1119 03:02:35.457869 1662687 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":38683,"bootTime":1763482673,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1119 03:02:35.457937 1662687 start.go:143] virtualization:  
	I1119 03:02:35.462367 1662687 out.go:179] * [no-preload-800908] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1119 03:02:35.466579 1662687 out.go:179]   - MINIKUBE_LOCATION=21924
	I1119 03:02:35.466647 1662687 notify.go:221] Checking for updates...
	I1119 03:02:35.472895 1662687 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 03:02:35.475923 1662687 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21924-1463525/kubeconfig
	I1119 03:02:35.478982 1662687 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-1463525/.minikube
	I1119 03:02:35.482008 1662687 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1119 03:02:35.484940 1662687 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 03:02:35.489794 1662687 config.go:182] Loaded profile config "embed-certs-592123": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 03:02:35.489945 1662687 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 03:02:35.525697 1662687 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1119 03:02:35.525861 1662687 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 03:02:35.597807 1662687 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-19 03:02:35.588766011 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 03:02:35.597916 1662687 docker.go:319] overlay module found
	I1119 03:02:35.601134 1662687 out.go:179] * Using the docker driver based on user configuration
	I1119 03:02:35.604082 1662687 start.go:309] selected driver: docker
	I1119 03:02:35.604103 1662687 start.go:930] validating driver "docker" against <nil>
	I1119 03:02:35.604117 1662687 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 03:02:35.604880 1662687 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 03:02:35.660739 1662687 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-19 03:02:35.651049118 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 03:02:35.660897 1662687 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1119 03:02:35.661136 1662687 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 03:02:35.664082 1662687 out.go:179] * Using Docker driver with root privileges
	I1119 03:02:35.666850 1662687 cni.go:84] Creating CNI manager for ""
	I1119 03:02:35.666920 1662687 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 03:02:35.666933 1662687 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1119 03:02:35.667016 1662687 start.go:353] cluster config:
	{Name:no-preload-800908 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-800908 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID
:0 GPUs: AutoPauseInterval:1m0s}
	I1119 03:02:35.670103 1662687 out.go:179] * Starting "no-preload-800908" primary control-plane node in "no-preload-800908" cluster
	I1119 03:02:35.673063 1662687 cache.go:134] Beginning downloading kic base image for docker with crio
	I1119 03:02:35.675961 1662687 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1119 03:02:35.678860 1662687 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 03:02:35.678936 1662687 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1119 03:02:35.678985 1662687 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/no-preload-800908/config.json ...
	I1119 03:02:35.679017 1662687 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/no-preload-800908/config.json: {Name:mkbb903fef33ebfaa212203e6dd156ba4fd7ef3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 03:02:35.679250 1662687 cache.go:107] acquiring lock: {Name:mkb58f30e5376d33040dfa777b3f8180ea85082b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 03:02:35.679314 1662687 cache.go:115] /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1119 03:02:35.679329 1662687 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 87.489µs
	I1119 03:02:35.679337 1662687 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1119 03:02:35.679353 1662687 cache.go:107] acquiring lock: {Name:mk4427b1057ed3426220ced6aa14c26e167661f6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 03:02:35.679428 1662687 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1119 03:02:35.679765 1662687 cache.go:107] acquiring lock: {Name:mke3a5e1f8219de1d6d968640b180760e94eaad4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 03:02:35.679937 1662687 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1119 03:02:35.680196 1662687 cache.go:107] acquiring lock: {Name:mkc90d3e387ee9423dce3105ec70e08f9a213a9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 03:02:35.680331 1662687 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1119 03:02:35.680511 1662687 cache.go:107] acquiring lock: {Name:mk6ffbb0756aa279cf3ba05ddd5e5f7e66e5cbe5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 03:02:35.680636 1662687 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1119 03:02:35.680882 1662687 cache.go:107] acquiring lock: {Name:mk88c3661a1e8c3438804e10f7c7d80646d19f18 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 03:02:35.681004 1662687 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1119 03:02:35.681200 1662687 cache.go:107] acquiring lock: {Name:mk4358ffb1d662d66c4de9c14824434035268345 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 03:02:35.681311 1662687 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1119 03:02:35.681590 1662687 cache.go:107] acquiring lock: {Name:mk1d702ebd613a383e3fb22e99729e7baba0b90f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 03:02:35.681721 1662687 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1119 03:02:35.683203 1662687 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1119 03:02:35.683855 1662687 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1119 03:02:35.684617 1662687 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1119 03:02:35.685035 1662687 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1119 03:02:35.685244 1662687 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1119 03:02:35.685439 1662687 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1119 03:02:35.685640 1662687 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1119 03:02:35.704167 1662687 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1119 03:02:35.704191 1662687 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1119 03:02:35.704204 1662687 cache.go:243] Successfully downloaded all kic artifacts
	I1119 03:02:35.704227 1662687 start.go:360] acquireMachinesLock for no-preload-800908: {Name:mk6bdccc03286e3d7d2db959eee2861a6643234c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 03:02:35.704329 1662687 start.go:364] duration metric: took 82.213µs to acquireMachinesLock for "no-preload-800908"
	I1119 03:02:35.704359 1662687 start.go:93] Provisioning new machine with config: &{Name:no-preload-800908 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-800908 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwa
rePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 03:02:35.704432 1662687 start.go:125] createHost starting for "" (driver="docker")
	I1119 03:02:35.708048 1662687 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1119 03:02:35.708291 1662687 start.go:159] libmachine.API.Create for "no-preload-800908" (driver="docker")
	I1119 03:02:35.708338 1662687 client.go:173] LocalClient.Create starting
	I1119 03:02:35.708427 1662687 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem
	I1119 03:02:35.708461 1662687 main.go:143] libmachine: Decoding PEM data...
	I1119 03:02:35.708473 1662687 main.go:143] libmachine: Parsing certificate...
	I1119 03:02:35.708523 1662687 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/cert.pem
	I1119 03:02:35.708540 1662687 main.go:143] libmachine: Decoding PEM data...
	I1119 03:02:35.708549 1662687 main.go:143] libmachine: Parsing certificate...
	I1119 03:02:35.708925 1662687 cli_runner.go:164] Run: docker network inspect no-preload-800908 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1119 03:02:35.733165 1662687 cli_runner.go:211] docker network inspect no-preload-800908 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1119 03:02:35.733320 1662687 network_create.go:284] running [docker network inspect no-preload-800908] to gather additional debugging logs...
	I1119 03:02:35.733365 1662687 cli_runner.go:164] Run: docker network inspect no-preload-800908
	W1119 03:02:35.751085 1662687 cli_runner.go:211] docker network inspect no-preload-800908 returned with exit code 1
	I1119 03:02:35.751115 1662687 network_create.go:287] error running [docker network inspect no-preload-800908]: docker network inspect no-preload-800908: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-800908 not found
	I1119 03:02:35.751129 1662687 network_create.go:289] output of [docker network inspect no-preload-800908]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-800908 not found
	
	** /stderr **
	I1119 03:02:35.751250 1662687 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 03:02:35.768246 1662687 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-30778cc553ec IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:62:24:59:d9:05:e6} reservation:<nil>}
	I1119 03:02:35.768642 1662687 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-564f8befa544 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:aa:bb:c9:f1:3d:0c} reservation:<nil>}
	I1119 03:02:35.768898 1662687 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-fccf9ce7bac2 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:92:9c:a6:ca:f9:d9} reservation:<nil>}
	I1119 03:02:35.769205 1662687 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-b71e8f31cf38 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:1a:d4:9b:56:8c:d1} reservation:<nil>}
	I1119 03:02:35.769717 1662687 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001c4efa0}
	I1119 03:02:35.769741 1662687 network_create.go:124] attempt to create docker network no-preload-800908 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1119 03:02:35.769802 1662687 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-800908 no-preload-800908
	I1119 03:02:35.845113 1662687 network_create.go:108] docker network no-preload-800908 192.168.85.0/24 created
	I1119 03:02:35.845186 1662687 kic.go:121] calculated static IP "192.168.85.2" for the "no-preload-800908" container
	I1119 03:02:35.845274 1662687 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1119 03:02:35.862331 1662687 cli_runner.go:164] Run: docker volume create no-preload-800908 --label name.minikube.sigs.k8s.io=no-preload-800908 --label created_by.minikube.sigs.k8s.io=true
	I1119 03:02:35.881071 1662687 oci.go:103] Successfully created a docker volume no-preload-800908
	I1119 03:02:35.881166 1662687 cli_runner.go:164] Run: docker run --rm --name no-preload-800908-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-800908 --entrypoint /usr/bin/test -v no-preload-800908:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib
	I1119 03:02:36.036021 1662687 cache.go:162] opening:  /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1119 03:02:36.041020 1662687 cache.go:162] opening:  /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1119 03:02:36.052050 1662687 cache.go:162] opening:  /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1119 03:02:36.065402 1662687 cache.go:162] opening:  /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1119 03:02:36.067400 1662687 cache.go:162] opening:  /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1119 03:02:36.155708 1662687 cache.go:157] /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1119 03:02:36.155751 1662687 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 474.880794ms
	I1119 03:02:36.155778 1662687 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1119 03:02:36.157157 1662687 cache.go:162] opening:  /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1119 03:02:36.254093 1662687 cache.go:162] opening:  /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1119 03:02:36.543532 1662687 oci.go:107] Successfully prepared a docker volume no-preload-800908
	I1119 03:02:36.543570 1662687 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	W1119 03:02:36.543698 1662687 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1119 03:02:36.543805 1662687 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1119 03:02:36.591415 1662687 cache.go:157] /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1119 03:02:36.591486 1662687 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 910.975286ms
	I1119 03:02:36.591514 1662687 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1119 03:02:36.607106 1662687 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-800908 --name no-preload-800908 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-800908 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-800908 --network no-preload-800908 --ip 192.168.85.2 --volume no-preload-800908:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a
	I1119 03:02:36.951543 1662687 cli_runner.go:164] Run: docker container inspect no-preload-800908 --format={{.State.Running}}
	I1119 03:02:37.100761 1662687 cli_runner.go:164] Run: docker container inspect no-preload-800908 --format={{.State.Status}}
	I1119 03:02:37.144946 1662687 cache.go:157] /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1119 03:02:37.144986 1662687 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 1.463399861s
	I1119 03:02:37.145001 1662687 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1119 03:02:37.206270 1662687 cache.go:157] /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1119 03:02:37.206296 1662687 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 1.526104035s
	I1119 03:02:37.206308 1662687 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1119 03:02:37.210065 1662687 cli_runner.go:164] Run: docker exec no-preload-800908 stat /var/lib/dpkg/alternatives/iptables
	I1119 03:02:37.213978 1662687 cache.go:157] /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1119 03:02:37.214001 1662687 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 1.534648003s
	I1119 03:02:37.214016 1662687 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1119 03:02:37.297807 1662687 cache.go:157] /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1119 03:02:37.297833 1662687 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 1.618072527s
	I1119 03:02:37.297845 1662687 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1119 03:02:37.299191 1662687 oci.go:144] the created container "no-preload-800908" has a running status.
	I1119 03:02:37.299215 1662687 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21924-1463525/.minikube/machines/no-preload-800908/id_rsa...
	I1119 03:02:37.743801 1662687 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21924-1463525/.minikube/machines/no-preload-800908/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1119 03:02:37.771982 1662687 cli_runner.go:164] Run: docker container inspect no-preload-800908 --format={{.State.Status}}
	I1119 03:02:37.788058 1662687 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1119 03:02:37.788082 1662687 kic_runner.go:114] Args: [docker exec --privileged no-preload-800908 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1119 03:02:37.836147 1662687 cli_runner.go:164] Run: docker container inspect no-preload-800908 --format={{.State.Status}}
	I1119 03:02:37.856889 1662687 machine.go:94] provisionDockerMachine start ...
	I1119 03:02:37.856990 1662687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-800908
	I1119 03:02:37.875528 1662687 main.go:143] libmachine: Using SSH client type: native
	I1119 03:02:37.875859 1662687 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34925 <nil> <nil>}
	I1119 03:02:37.875877 1662687 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 03:02:37.876540 1662687 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59892->127.0.0.1:34925: read: connection reset by peer
	I1119 03:02:38.961418 1662687 cache.go:157] /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1119 03:02:38.961443 1662687 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 3.280261838s
	I1119 03:02:38.961454 1662687 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1119 03:02:38.961466 1662687 cache.go:87] Successfully saved all images to host disk.
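	
	The lines above record the Docker network, volume, and container that were created for the no-preload-800908 profile, plus the static IP 192.168.85.2 that was calculated for it. As a minimal cross-check (a sketch, assuming the Docker CLI on the same host and the names taken from the log above):
	
	  docker network inspect no-preload-800908 -f '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
	  docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' no-preload-800908
	
	The first command should report 192.168.85.0/24 and 192.168.85.1, the second the static IP 192.168.85.2 assigned at container creation.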
	
	
	==> CRI-O <==
	Nov 19 03:02:18 embed-certs-592123 crio[650]: time="2025-11-19T03:02:18.24779683Z" level=info msg="Removed container 93204908b3c68b6897a0530972a0508b65471c393c4909e541cdfbbcc62d813d: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dftfl/dashboard-metrics-scraper" id=db6e3976-ed35-4cba-aeef-4e0ddebb723a name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 19 03:02:22 embed-certs-592123 conmon[1120]: conmon c25eeb92b97431c67651 <ninfo>: container 1127 exited with status 1
	Nov 19 03:02:23 embed-certs-592123 crio[650]: time="2025-11-19T03:02:23.251796009Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=1cda5c70-40ae-4baa-97d0-217699e5a1c9 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 03:02:23 embed-certs-592123 crio[650]: time="2025-11-19T03:02:23.252667952Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=3ce11f92-4492-456a-a5aa-18c3e42e16b9 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 03:02:23 embed-certs-592123 crio[650]: time="2025-11-19T03:02:23.253498984Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=9b751877-0f16-4697-9e4a-b2eb1a58292c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 03:02:23 embed-certs-592123 crio[650]: time="2025-11-19T03:02:23.253695409Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 03:02:23 embed-certs-592123 crio[650]: time="2025-11-19T03:02:23.262068421Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 03:02:23 embed-certs-592123 crio[650]: time="2025-11-19T03:02:23.262245401Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/da0e0a0d5f91b39ddc9187045af3dcb459d761c7152ce1400e3435907350e31d/merged/etc/passwd: no such file or directory"
	Nov 19 03:02:23 embed-certs-592123 crio[650]: time="2025-11-19T03:02:23.26227214Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/da0e0a0d5f91b39ddc9187045af3dcb459d761c7152ce1400e3435907350e31d/merged/etc/group: no such file or directory"
	Nov 19 03:02:23 embed-certs-592123 crio[650]: time="2025-11-19T03:02:23.262516482Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 03:02:23 embed-certs-592123 crio[650]: time="2025-11-19T03:02:23.277154012Z" level=info msg="Created container d75fb98c294954a086d7ac0cd21c45155c83c760cc142791e7d4eca8043ba541: kube-system/storage-provisioner/storage-provisioner" id=9b751877-0f16-4697-9e4a-b2eb1a58292c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 03:02:23 embed-certs-592123 crio[650]: time="2025-11-19T03:02:23.278238174Z" level=info msg="Starting container: d75fb98c294954a086d7ac0cd21c45155c83c760cc142791e7d4eca8043ba541" id=016f7afc-1e76-40f8-b72e-8fe2cb445521 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 03:02:23 embed-certs-592123 crio[650]: time="2025-11-19T03:02:23.280086178Z" level=info msg="Started container" PID=1647 containerID=d75fb98c294954a086d7ac0cd21c45155c83c760cc142791e7d4eca8043ba541 description=kube-system/storage-provisioner/storage-provisioner id=016f7afc-1e76-40f8-b72e-8fe2cb445521 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d2492c4dee44c9ed9fe8ffeb8923d46abaa2b4170225b84e1d1edd0f783239cc
	Nov 19 03:02:32 embed-certs-592123 crio[650]: time="2025-11-19T03:02:32.432564182Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 03:02:32 embed-certs-592123 crio[650]: time="2025-11-19T03:02:32.436829118Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 03:02:32 embed-certs-592123 crio[650]: time="2025-11-19T03:02:32.436863357Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 03:02:32 embed-certs-592123 crio[650]: time="2025-11-19T03:02:32.436884337Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 03:02:32 embed-certs-592123 crio[650]: time="2025-11-19T03:02:32.440859928Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 03:02:32 embed-certs-592123 crio[650]: time="2025-11-19T03:02:32.440895217Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 03:02:32 embed-certs-592123 crio[650]: time="2025-11-19T03:02:32.44091737Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 03:02:32 embed-certs-592123 crio[650]: time="2025-11-19T03:02:32.444560605Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 03:02:32 embed-certs-592123 crio[650]: time="2025-11-19T03:02:32.444595575Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 03:02:32 embed-certs-592123 crio[650]: time="2025-11-19T03:02:32.444618959Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 03:02:32 embed-certs-592123 crio[650]: time="2025-11-19T03:02:32.44803386Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 03:02:32 embed-certs-592123 crio[650]: time="2025-11-19T03:02:32.448067196Z" level=info msg="Updated default CNI network name to kindnet"
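	
	The CNI monitoring events above show kindnet rewriting /etc/cni/net.d/10-kindnet.conflist and CRI-O picking the change up. A quick way to inspect the resulting config (a sketch, assuming the embed-certs-592123 node is still running) is:
	
	  out/minikube-linux-arm64 -p embed-certs-592123 ssh "sudo cat /etc/cni/net.d/10-kindnet.conflist"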
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	d75fb98c29495       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           18 seconds ago      Running             storage-provisioner         2                   d2492c4dee44c       storage-provisioner                          kube-system
	9ec1e7c5c349f       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           23 seconds ago      Exited              dashboard-metrics-scraper   2                   ffe0d4b7a57fd       dashboard-metrics-scraper-6ffb444bf9-dftfl   kubernetes-dashboard
	b107b252e2343       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   34 seconds ago      Running             kubernetes-dashboard        0                   5f082440126cf       kubernetes-dashboard-855c9754f9-76f6n        kubernetes-dashboard
	00cd4bbd35ff9       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           49 seconds ago      Running             coredns                     1                   f903c213bc8e2       coredns-66bc5c9577-vtc44                     kube-system
	92bc8dc1a004d       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           49 seconds ago      Running             busybox                     1                   9d89487a4f031       busybox                                      default
	693b87a40338d       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           49 seconds ago      Running             kindnet-cni                 1                   e96e5ce4ffa3c       kindnet-sv99p                                kube-system
	de8510aae91eb       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           49 seconds ago      Running             kube-proxy                  1                   5300a780ed2e2       kube-proxy-55pcf                             kube-system
	c25eeb92b9743       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           50 seconds ago      Exited              storage-provisioner         1                   d2492c4dee44c       storage-provisioner                          kube-system
	28baf9cda670a       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           59 seconds ago      Running             kube-apiserver              1                   f7854adca359c       kube-apiserver-embed-certs-592123            kube-system
	44051fa115dbd       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           59 seconds ago      Running             kube-controller-manager     1                   f6d7b195302d2       kube-controller-manager-embed-certs-592123   kube-system
	0c30389a4661b       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           59 seconds ago      Running             etcd                        1                   8758ace803ba8       etcd-embed-certs-592123                      kube-system
	50a2bdb9c6751       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           59 seconds ago      Running             kube-scheduler              1                   5630999172fc3       kube-scheduler-embed-certs-592123            kube-system
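	
	This table is the node's CRI-level view of its containers; roughly the same listing can be reproduced against the node (a sketch, assuming crictl is available inside the kicbase image alongside CRI-O) with:
	
	  out/minikube-linux-arm64 -p embed-certs-592123 ssh "sudo crictl ps -a"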
	
	
	==> coredns [00cd4bbd35ff9b6d4771e35883c59aeba78d227be099a95a6bb86d479cf45616] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:52602 - 9460 "HINFO IN 442879192984157348.4078967189326169954. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.026935516s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
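	
	The reflector errors above mean CoreDNS could not reach the kubernetes Service ClusterIP (10.96.0.1:443) while the control plane was restarting; the earlier "starting server with unsynced Kubernetes API" warning is the same symptom. A hedged way to confirm the Service and the CoreDNS pods after recovery (assuming the kubeconfig context is named after the profile, as minikube normally sets up):
	
	  kubectl --context embed-certs-592123 get svc kubernetes
	  kubectl --context embed-certs-592123 -n kube-system get pods -l k8s-app=kube-dns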
	
	
	==> describe nodes <==
	Name:               embed-certs-592123
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-592123
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ebd49efe8b7d44f89fc96e21beffcd64dfee8277
	                    minikube.k8s.io/name=embed-certs-592123
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T03_00_18_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 03:00:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-592123
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 03:02:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 03:02:21 +0000   Wed, 19 Nov 2025 03:00:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 03:02:21 +0000   Wed, 19 Nov 2025 03:00:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 03:02:21 +0000   Wed, 19 Nov 2025 03:00:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 03:02:21 +0000   Wed, 19 Nov 2025 03:01:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-592123
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                c8c3e11e-b7bd-48ff-908e-852c6643928c
	  Boot ID:                    b92b1939-fcd0-45dc-ac89-2d161566a71c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-66bc5c9577-vtc44                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m18s
	  kube-system                 etcd-embed-certs-592123                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m27s
	  kube-system                 kindnet-sv99p                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m19s
	  kube-system                 kube-apiserver-embed-certs-592123             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 kube-controller-manager-embed-certs-592123    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 kube-proxy-55pcf                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m19s
	  kube-system                 kube-scheduler-embed-certs-592123             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m17s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-dftfl    0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-76f6n         0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m18s                  kube-proxy       
	  Normal   Starting                 48s                    kube-proxy       
	  Normal   Starting                 2m33s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m33s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m33s (x8 over 2m33s)  kubelet          Node embed-certs-592123 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m33s (x8 over 2m33s)  kubelet          Node embed-certs-592123 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m33s (x8 over 2m33s)  kubelet          Node embed-certs-592123 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m24s                  kubelet          Node embed-certs-592123 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m24s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m24s                  kubelet          Node embed-certs-592123 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m24s                  kubelet          Node embed-certs-592123 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m24s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m20s                  node-controller  Node embed-certs-592123 event: Registered Node embed-certs-592123 in Controller
	  Normal   NodeReady                97s                    kubelet          Node embed-certs-592123 status is now: NodeReady
	  Normal   Starting                 60s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 60s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  59s (x8 over 60s)      kubelet          Node embed-certs-592123 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    59s (x8 over 60s)      kubelet          Node embed-certs-592123 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     59s (x8 over 60s)      kubelet          Node embed-certs-592123 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           47s                    node-controller  Node embed-certs-592123 event: Registered Node embed-certs-592123 in Controller
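	
	For reference, this block is a per-node describe; the same view can be pulled directly (assuming the embed-certs-592123 kubeconfig context) with:
	
	  kubectl --context embed-certs-592123 describe node embed-certs-592123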
	
	
	==> dmesg <==
	[Nov19 02:38] overlayfs: idmapped layers are currently not supported
	[Nov19 02:39] overlayfs: idmapped layers are currently not supported
	[Nov19 02:41] overlayfs: idmapped layers are currently not supported
	[ +25.528121] overlayfs: idmapped layers are currently not supported
	[ +11.329962] overlayfs: idmapped layers are currently not supported
	[Nov19 02:42] overlayfs: idmapped layers are currently not supported
	[ +16.386117] overlayfs: idmapped layers are currently not supported
	[Nov19 02:43] overlayfs: idmapped layers are currently not supported
	[ +23.762081] overlayfs: idmapped layers are currently not supported
	[Nov19 02:45] overlayfs: idmapped layers are currently not supported
	[Nov19 02:46] overlayfs: idmapped layers are currently not supported
	[Nov19 02:48] overlayfs: idmapped layers are currently not supported
	[Nov19 02:50] overlayfs: idmapped layers are currently not supported
	[ +30.622614] overlayfs: idmapped layers are currently not supported
	[Nov19 02:53] overlayfs: idmapped layers are currently not supported
	[Nov19 02:55] overlayfs: idmapped layers are currently not supported
	[ +48.629499] overlayfs: idmapped layers are currently not supported
	[Nov19 02:56] overlayfs: idmapped layers are currently not supported
	[ +31.470515] overlayfs: idmapped layers are currently not supported
	[Nov19 02:57] overlayfs: idmapped layers are currently not supported
	[Nov19 02:58] overlayfs: idmapped layers are currently not supported
	[Nov19 03:00] overlayfs: idmapped layers are currently not supported
	[  +8.385032] overlayfs: idmapped layers are currently not supported
	[Nov19 03:01] overlayfs: idmapped layers are currently not supported
	[  +9.842210] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [0c30389a4661b622b8e4e66ed3373832cf9f4abe199dc1ec782692aa5b76a699] <==
	{"level":"warn","ts":"2025-11-19T03:01:48.554511Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:48.568592Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:48.579027Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:48.605360Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:48.623668Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58374","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:48.637179Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:48.662106Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:48.699970Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:48.700121Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:48.717605Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:48.745784Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:48.767174Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:48.793355Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:48.813933Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:48.826177Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:48.851013Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:48.873061Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:48.885859Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:48.907306Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:48.939496Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:48.979447Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:49.015048Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:49.031723Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:49.063202Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:49.170814Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58760","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 03:02:42 up 10:44,  0 user,  load average: 3.80, 3.44, 2.76
	Linux embed-certs-592123 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [693b87a40338d1ed91a893430753efcc324f88bb8889e2774deb45e612737e46] <==
	I1119 03:01:52.250575       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 03:01:52.281784       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1119 03:01:52.281933       1 main.go:148] setting mtu 1500 for CNI 
	I1119 03:01:52.281946       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 03:01:52.281961       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T03:01:52Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 03:01:52.436533       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 03:01:52.436565       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 03:01:52.436574       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 03:01:52.436876       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1119 03:02:22.427475       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1119 03:02:22.437126       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1119 03:02:22.437137       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1119 03:02:22.437252       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1119 03:02:23.937283       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 03:02:23.937317       1 metrics.go:72] Registering metrics
	I1119 03:02:23.937388       1 controller.go:711] "Syncing nftables rules"
	I1119 03:02:32.432268       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1119 03:02:32.432323       1 main.go:301] handling current node
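	
	kindnet is handling only the single control-plane node here and syncing nftables rules for network policies; the nri message simply reflects that no socket exists at /var/run/nri/nri.sock. Its recent output can be tailed (a sketch; the app=kindnet label is an assumption about the DaemonSet's pod labels) with:
	
	  kubectl --context embed-certs-592123 -n kube-system logs -l app=kindnet --tail=20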
	
	
	==> kube-apiserver [28baf9cda670ab54ffce2ff7181d4841299d3d55c51eab8df2a52c1c366a4111] <==
	I1119 03:01:50.753444       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1119 03:01:50.755618       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1119 03:01:50.755637       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1119 03:01:50.755926       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1119 03:01:50.756933       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1119 03:01:50.792638       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 03:01:50.798830       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1119 03:01:50.798918       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1119 03:01:50.800444       1 aggregator.go:171] initial CRD sync complete...
	I1119 03:01:50.800471       1 autoregister_controller.go:144] Starting autoregister controller
	I1119 03:01:50.800479       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1119 03:01:50.800486       1 cache.go:39] Caches are synced for autoregister controller
	I1119 03:01:50.871009       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1119 03:01:51.010424       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1119 03:01:51.167851       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1119 03:01:51.239677       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 03:01:52.515081       1 controller.go:667] quota admission added evaluator for: namespaces
	I1119 03:01:52.625821       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1119 03:01:52.704773       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 03:01:52.735906       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 03:01:52.930658       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.186.228"}
	I1119 03:01:52.969692       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.30.238"}
	I1119 03:01:54.806841       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1119 03:01:55.112477       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1119 03:01:55.205539       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [44051fa115dbdefd2547da0097f35a9d487cbcc9b4becc2a70f91a77a0d1da21] <==
	I1119 03:01:54.713430       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 03:01:54.713826       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1119 03:01:54.715200       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 03:01:54.731354       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1119 03:01:54.735484       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1119 03:01:54.739936       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1119 03:01:54.740055       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1119 03:01:54.740111       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1119 03:01:54.740140       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1119 03:01:54.740166       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1119 03:01:54.742555       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1119 03:01:54.747178       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1119 03:01:54.747264       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1119 03:01:54.747278       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1119 03:01:54.747688       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1119 03:01:54.748075       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1119 03:01:54.755333       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1119 03:01:54.758268       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1119 03:01:54.762753       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1119 03:01:54.763156       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1119 03:01:54.766427       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 03:01:54.797212       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 03:01:54.797311       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1119 03:01:54.797342       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1119 03:01:54.797980       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	
	
	==> kube-proxy [de8510aae91ebe4f52e6549cedefec1262166611b28429518e2b4db5fffb05e5] <==
	I1119 03:01:52.462784       1 server_linux.go:53] "Using iptables proxy"
	I1119 03:01:52.608894       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 03:01:52.718073       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 03:01:52.718109       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1119 03:01:52.718191       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 03:01:53.113068       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 03:01:53.134401       1 server_linux.go:132] "Using iptables Proxier"
	I1119 03:01:53.152653       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 03:01:53.153139       1 server.go:527] "Version info" version="v1.34.1"
	I1119 03:01:53.153389       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 03:01:53.156530       1 config.go:200] "Starting service config controller"
	I1119 03:01:53.156586       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 03:01:53.156685       1 config.go:106] "Starting endpoint slice config controller"
	I1119 03:01:53.156718       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 03:01:53.156756       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 03:01:53.156781       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 03:01:53.157470       1 config.go:309] "Starting node config controller"
	I1119 03:01:53.157551       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 03:01:53.157592       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 03:01:53.257787       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1119 03:01:53.258326       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1119 03:01:53.258343       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
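	
	kube-proxy came up with the iptables proxier, dual-stack with IPv4 as the primary family; the nodePortAddresses warning is informational. When iptables mode is in use, the service rules it programs can be spot-checked on the node (a sketch using the standard KUBE-SERVICES chain in the nat table):
	
	  out/minikube-linux-arm64 -p embed-certs-592123 ssh "sudo iptables -t nat -L KUBE-SERVICES -n | head -n 20"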
	
	
	==> kube-scheduler [50a2bdb9c67513a1526c7008d09101b3db95d7bac468c5e2f2f7dcda041de7b5] <==
	I1119 03:01:44.650167       1 serving.go:386] Generated self-signed cert in-memory
	I1119 03:01:51.379745       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1119 03:01:51.379812       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 03:01:51.421090       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1119 03:01:51.421348       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1119 03:01:51.421413       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1119 03:01:51.421468       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1119 03:01:51.459014       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 03:01:51.459034       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 03:01:51.459069       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1119 03:01:51.459076       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1119 03:01:51.543139       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1119 03:01:51.560057       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1119 03:01:51.560182       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 19 03:01:55 embed-certs-592123 kubelet[778]: I1119 03:01:55.463442     778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnl9r\" (UniqueName: \"kubernetes.io/projected/087ebb06-98c5-4966-a059-ba81f8ae1b3d-kube-api-access-qnl9r\") pod \"dashboard-metrics-scraper-6ffb444bf9-dftfl\" (UID: \"087ebb06-98c5-4966-a059-ba81f8ae1b3d\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dftfl"
	Nov 19 03:01:55 embed-certs-592123 kubelet[778]: I1119 03:01:55.463471     778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/d7ebcebd-3f82-4d27-8b51-e33625e09608-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-76f6n\" (UID: \"d7ebcebd-3f82-4d27-8b51-e33625e09608\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-76f6n"
	Nov 19 03:01:55 embed-certs-592123 kubelet[778]: I1119 03:01:55.463500     778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzffk\" (UniqueName: \"kubernetes.io/projected/d7ebcebd-3f82-4d27-8b51-e33625e09608-kube-api-access-wzffk\") pod \"kubernetes-dashboard-855c9754f9-76f6n\" (UID: \"d7ebcebd-3f82-4d27-8b51-e33625e09608\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-76f6n"
	Nov 19 03:01:55 embed-certs-592123 kubelet[778]: W1119 03:01:55.699045     778 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/dac66acc5df417ae4fe7f3148566f999b25ed8eb465f085a14cb838106ad0a5e/crio-ffe0d4b7a57fd4d57cc092294130adc66988df4f247cf1b4b7cdcd63d01b9954 WatchSource:0}: Error finding container ffe0d4b7a57fd4d57cc092294130adc66988df4f247cf1b4b7cdcd63d01b9954: Status 404 returned error can't find the container with id ffe0d4b7a57fd4d57cc092294130adc66988df4f247cf1b4b7cdcd63d01b9954
	Nov 19 03:02:02 embed-certs-592123 kubelet[778]: I1119 03:02:02.189052     778 scope.go:117] "RemoveContainer" containerID="17a9d01c833acc384da867b00fc47910b5b45ee2861c6a8defa5d8286de68ef6"
	Nov 19 03:02:03 embed-certs-592123 kubelet[778]: I1119 03:02:03.197404     778 scope.go:117] "RemoveContainer" containerID="93204908b3c68b6897a0530972a0508b65471c393c4909e541cdfbbcc62d813d"
	Nov 19 03:02:03 embed-certs-592123 kubelet[778]: E1119 03:02:03.197630     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dftfl_kubernetes-dashboard(087ebb06-98c5-4966-a059-ba81f8ae1b3d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dftfl" podUID="087ebb06-98c5-4966-a059-ba81f8ae1b3d"
	Nov 19 03:02:03 embed-certs-592123 kubelet[778]: I1119 03:02:03.198614     778 scope.go:117] "RemoveContainer" containerID="17a9d01c833acc384da867b00fc47910b5b45ee2861c6a8defa5d8286de68ef6"
	Nov 19 03:02:04 embed-certs-592123 kubelet[778]: I1119 03:02:04.200792     778 scope.go:117] "RemoveContainer" containerID="93204908b3c68b6897a0530972a0508b65471c393c4909e541cdfbbcc62d813d"
	Nov 19 03:02:04 embed-certs-592123 kubelet[778]: E1119 03:02:04.200935     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dftfl_kubernetes-dashboard(087ebb06-98c5-4966-a059-ba81f8ae1b3d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dftfl" podUID="087ebb06-98c5-4966-a059-ba81f8ae1b3d"
	Nov 19 03:02:05 embed-certs-592123 kubelet[778]: I1119 03:02:05.646966     778 scope.go:117] "RemoveContainer" containerID="93204908b3c68b6897a0530972a0508b65471c393c4909e541cdfbbcc62d813d"
	Nov 19 03:02:05 embed-certs-592123 kubelet[778]: E1119 03:02:05.647157     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dftfl_kubernetes-dashboard(087ebb06-98c5-4966-a059-ba81f8ae1b3d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dftfl" podUID="087ebb06-98c5-4966-a059-ba81f8ae1b3d"
	Nov 19 03:02:17 embed-certs-592123 kubelet[778]: I1119 03:02:17.980355     778 scope.go:117] "RemoveContainer" containerID="93204908b3c68b6897a0530972a0508b65471c393c4909e541cdfbbcc62d813d"
	Nov 19 03:02:18 embed-certs-592123 kubelet[778]: I1119 03:02:18.236660     778 scope.go:117] "RemoveContainer" containerID="93204908b3c68b6897a0530972a0508b65471c393c4909e541cdfbbcc62d813d"
	Nov 19 03:02:19 embed-certs-592123 kubelet[778]: I1119 03:02:19.240337     778 scope.go:117] "RemoveContainer" containerID="9ec1e7c5c349fe1b082432751f82acffb163fa8de01a3b8cbd8e6a9956820502"
	Nov 19 03:02:19 embed-certs-592123 kubelet[778]: E1119 03:02:19.240500     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dftfl_kubernetes-dashboard(087ebb06-98c5-4966-a059-ba81f8ae1b3d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dftfl" podUID="087ebb06-98c5-4966-a059-ba81f8ae1b3d"
	Nov 19 03:02:19 embed-certs-592123 kubelet[778]: I1119 03:02:19.255922     778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-76f6n" podStartSLOduration=12.99969319 podStartE2EDuration="24.255038995s" podCreationTimestamp="2025-11-19 03:01:55 +0000 UTC" firstStartedPulling="2025-11-19 03:01:55.728851766 +0000 UTC m=+14.118829444" lastFinishedPulling="2025-11-19 03:02:06.984197563 +0000 UTC m=+25.374175249" observedRunningTime="2025-11-19 03:02:07.231786735 +0000 UTC m=+25.621764412" watchObservedRunningTime="2025-11-19 03:02:19.255038995 +0000 UTC m=+37.645016673"
	Nov 19 03:02:23 embed-certs-592123 kubelet[778]: I1119 03:02:23.251373     778 scope.go:117] "RemoveContainer" containerID="c25eeb92b97431c67651c962c7e0e3bc4fdf21f9c78ccc749a241b682fb1ee70"
	Nov 19 03:02:25 embed-certs-592123 kubelet[778]: I1119 03:02:25.647023     778 scope.go:117] "RemoveContainer" containerID="9ec1e7c5c349fe1b082432751f82acffb163fa8de01a3b8cbd8e6a9956820502"
	Nov 19 03:02:25 embed-certs-592123 kubelet[778]: E1119 03:02:25.647194     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dftfl_kubernetes-dashboard(087ebb06-98c5-4966-a059-ba81f8ae1b3d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dftfl" podUID="087ebb06-98c5-4966-a059-ba81f8ae1b3d"
	Nov 19 03:02:36 embed-certs-592123 kubelet[778]: I1119 03:02:36.979861     778 scope.go:117] "RemoveContainer" containerID="9ec1e7c5c349fe1b082432751f82acffb163fa8de01a3b8cbd8e6a9956820502"
	Nov 19 03:02:36 embed-certs-592123 kubelet[778]: E1119 03:02:36.980038     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dftfl_kubernetes-dashboard(087ebb06-98c5-4966-a059-ba81f8ae1b3d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dftfl" podUID="087ebb06-98c5-4966-a059-ba81f8ae1b3d"
	Nov 19 03:02:39 embed-certs-592123 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 19 03:02:39 embed-certs-592123 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 19 03:02:39 embed-certs-592123 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [b107b252e23438e4e91346429aa292ef99768ea80e02ebc445b2a52c2f401d41] <==
	2025/11/19 03:02:07 Using namespace: kubernetes-dashboard
	2025/11/19 03:02:07 Using in-cluster config to connect to apiserver
	2025/11/19 03:02:07 Using secret token for csrf signing
	2025/11/19 03:02:07 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/19 03:02:07 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/19 03:02:07 Successful initial request to the apiserver, version: v1.34.1
	2025/11/19 03:02:07 Generating JWE encryption key
	2025/11/19 03:02:07 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/19 03:02:07 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/19 03:02:07 Initializing JWE encryption key from synchronized object
	2025/11/19 03:02:07 Creating in-cluster Sidecar client
	2025/11/19 03:02:07 Serving insecurely on HTTP port: 9090
	2025/11/19 03:02:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/19 03:02:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/19 03:02:07 Starting overwatch
	
	
	==> storage-provisioner [c25eeb92b97431c67651c962c7e0e3bc4fdf21f9c78ccc749a241b682fb1ee70] <==
	I1119 03:01:52.488861       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1119 03:02:22.497050       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [d75fb98c294954a086d7ac0cd21c45155c83c760cc142791e7d4eca8043ba541] <==
	I1119 03:02:23.307918       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1119 03:02:23.319785       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1119 03:02:23.319837       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1119 03:02:23.323062       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:02:26.779393       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:02:31.039497       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:02:34.637980       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:02:37.692109       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:02:40.714346       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:02:40.722512       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 03:02:40.722654       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1119 03:02:40.722831       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-592123_6ff3290a-7f89-4f14-a8c2-88b2b0dd9106!
	I1119 03:02:40.722886       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"18542100-311c-4ccc-932d-a0e1133b54bb", APIVersion:"v1", ResourceVersion:"687", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-592123_6ff3290a-7f89-4f14-a8c2-88b2b0dd9106 became leader
	W1119 03:02:40.730949       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:02:40.736904       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 03:02:40.823541       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-592123_6ff3290a-7f89-4f14-a8c2-88b2b0dd9106!
	

                                                
                                                
-- /stdout --
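Note on the storage-provisioner lines above: the repeated "v1 Endpoints is deprecated in v1.33+" warnings appear because the provisioner still takes its kube-system/k8s.io-minikube-hostpath leader lock through a v1 Endpoints object (the LeaderElection event is recorded against an Endpoints reference). The following is a minimal client-go sketch, not minikube's actual provisioner code, of acquiring the same kind of lock via the coordination.k8s.io Lease resource, which avoids those warnings; only the namespace and lock name are taken from the log, everything else is illustrative.

package main

import (
	"context"
	"os"
	"time"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	hostname, _ := os.Hostname()
	// Lease-based lock instead of the deprecated Endpoints-based one.
	lock, err := resourcelock.New(
		resourcelock.LeasesResourceLock,
		"kube-system",
		"k8s.io-minikube-hostpath",
		client.CoreV1(),
		client.CoordinationV1(),
		resourcelock.ResourceLockConfig{Identity: hostname},
	)
	if err != nil {
		panic(err)
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				// start the provisioner controller here, analogous to the
				// "Starting provisioner controller" line in the log above
			},
			OnStoppedLeading: func() {
				// lost the lease; stop provisioning
			},
		},
	})
}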
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-592123 -n embed-certs-592123
E1119 03:02:42.679292 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/functional-132054/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-592123 -n embed-certs-592123: exit status 2 (483.252508ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-592123 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
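The harness's final check above shells out to kubectl with --field-selector=status.phase!=Running across all namespaces. A minimal client-go sketch of the equivalent query follows; the kubeconfig handling here is illustrative and not taken from the test code.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative: load ~/.kube/config; the harness uses its own context flag.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Empty namespace means all namespaces, mirroring kubectl's -A flag.
	pods, err := client.CoreV1().Pods("").List(context.Background(), metav1.ListOptions{
		FieldSelector: "status.phase!=Running",
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Println(p.Namespace + "/" + p.Name)
	}
}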
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-592123
helpers_test.go:243: (dbg) docker inspect embed-certs-592123:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "dac66acc5df417ae4fe7f3148566f999b25ed8eb465f085a14cb838106ad0a5e",
	        "Created": "2025-11-19T02:59:47.671670147Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1658247,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T03:01:32.115303141Z",
	            "FinishedAt": "2025-11-19T03:01:31.081912518Z"
	        },
	        "Image": "sha256:6dfeb5329cc83d126555f88f43b09eef7a09c7f546c9166b94d33747df91b6df",
	        "ResolvConfPath": "/var/lib/docker/containers/dac66acc5df417ae4fe7f3148566f999b25ed8eb465f085a14cb838106ad0a5e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dac66acc5df417ae4fe7f3148566f999b25ed8eb465f085a14cb838106ad0a5e/hostname",
	        "HostsPath": "/var/lib/docker/containers/dac66acc5df417ae4fe7f3148566f999b25ed8eb465f085a14cb838106ad0a5e/hosts",
	        "LogPath": "/var/lib/docker/containers/dac66acc5df417ae4fe7f3148566f999b25ed8eb465f085a14cb838106ad0a5e/dac66acc5df417ae4fe7f3148566f999b25ed8eb465f085a14cb838106ad0a5e-json.log",
	        "Name": "/embed-certs-592123",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "embed-certs-592123:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-592123",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "dac66acc5df417ae4fe7f3148566f999b25ed8eb465f085a14cb838106ad0a5e",
	                "LowerDir": "/var/lib/docker/overlay2/0339914a5a3675144df08f1c4c574bd9322eef4783e3f9e23b63823595a97dd7-init/diff:/var/lib/docker/overlay2/c48d08e2bd245db4e1c5c6447aff9f72126e9377265a1f1172daf5070a059e2a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0339914a5a3675144df08f1c4c574bd9322eef4783e3f9e23b63823595a97dd7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0339914a5a3675144df08f1c4c574bd9322eef4783e3f9e23b63823595a97dd7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0339914a5a3675144df08f1c4c574bd9322eef4783e3f9e23b63823595a97dd7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-592123",
	                "Source": "/var/lib/docker/volumes/embed-certs-592123/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-592123",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-592123",
	                "name.minikube.sigs.k8s.io": "embed-certs-592123",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "65df5bc049f755bd22bae72e993ca50c060f2b3c27dc19f2f54eaca5562ea46f",
	            "SandboxKey": "/var/run/docker/netns/65df5bc049f7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34920"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34921"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34924"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34922"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34923"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-592123": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f2:2e:5c:18:03:1d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b71e8f31cf38cfb3f1f6842ca4b0d69a179bc8211fb70e2032bcc5a594b1fbd8",
	                    "EndpointID": "be6c90ae69d1c3b7d840b0ca4621cdd4917d32ff230486330f11db4370700f6f",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-592123",
	                        "dac66acc5df4"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
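The inspect dump above is the full raw JSON; for a targeted check (for example, the State.Paused flag that the failing Pause test turns on) the same data can be pulled through docker inspect's Go-template --format flag, in the same style as the --format={{.Host}} status calls below. A minimal sketch, assuming only the container name shown in the output above:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Extract just the container status and paused flag from docker inspect.
	out, err := exec.Command(
		"docker", "inspect",
		"--format", "{{.State.Status}} {{.State.Paused}}",
		"embed-certs-592123",
	).Output()
	if err != nil {
		panic(err)
	}
	fmt.Println(strings.TrimSpace(string(out))) // e.g. "running false"
}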
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-592123 -n embed-certs-592123
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-592123 -n embed-certs-592123: exit status 2 (504.671005ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-592123 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-592123 logs -n 25: (1.704384134s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p old-k8s-version-525469 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-525469       │ jenkins │ v1.37.0 │ 19 Nov 25 02:58 UTC │ 19 Nov 25 02:59 UTC │
	│ image   │ old-k8s-version-525469 image list --format=json                                                                                                                                                                                               │ old-k8s-version-525469       │ jenkins │ v1.37.0 │ 19 Nov 25 02:59 UTC │ 19 Nov 25 02:59 UTC │
	│ pause   │ -p old-k8s-version-525469 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-525469       │ jenkins │ v1.37.0 │ 19 Nov 25 02:59 UTC │                     │
	│ start   │ -p cert-expiration-422184 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-422184       │ jenkins │ v1.37.0 │ 19 Nov 25 02:59 UTC │ 19 Nov 25 02:59 UTC │
	│ delete  │ -p old-k8s-version-525469                                                                                                                                                                                                                     │ old-k8s-version-525469       │ jenkins │ v1.37.0 │ 19 Nov 25 02:59 UTC │ 19 Nov 25 02:59 UTC │
	│ delete  │ -p old-k8s-version-525469                                                                                                                                                                                                                     │ old-k8s-version-525469       │ jenkins │ v1.37.0 │ 19 Nov 25 02:59 UTC │ 19 Nov 25 02:59 UTC │
	│ start   │ -p default-k8s-diff-port-579203 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-579203 │ jenkins │ v1.37.0 │ 19 Nov 25 02:59 UTC │ 19 Nov 25 03:01 UTC │
	│ delete  │ -p cert-expiration-422184                                                                                                                                                                                                                     │ cert-expiration-422184       │ jenkins │ v1.37.0 │ 19 Nov 25 02:59 UTC │ 19 Nov 25 02:59 UTC │
	│ start   │ -p embed-certs-592123 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-592123           │ jenkins │ v1.37.0 │ 19 Nov 25 02:59 UTC │ 19 Nov 25 03:01 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-579203 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-579203 │ jenkins │ v1.37.0 │ 19 Nov 25 03:01 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-579203 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-579203 │ jenkins │ v1.37.0 │ 19 Nov 25 03:01 UTC │ 19 Nov 25 03:01 UTC │
	│ addons  │ enable metrics-server -p embed-certs-592123 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-592123           │ jenkins │ v1.37.0 │ 19 Nov 25 03:01 UTC │                     │
	│ stop    │ -p embed-certs-592123 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-592123           │ jenkins │ v1.37.0 │ 19 Nov 25 03:01 UTC │ 19 Nov 25 03:01 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-579203 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-579203 │ jenkins │ v1.37.0 │ 19 Nov 25 03:01 UTC │ 19 Nov 25 03:01 UTC │
	│ start   │ -p default-k8s-diff-port-579203 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-579203 │ jenkins │ v1.37.0 │ 19 Nov 25 03:01 UTC │ 19 Nov 25 03:02 UTC │
	│ addons  │ enable dashboard -p embed-certs-592123 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-592123           │ jenkins │ v1.37.0 │ 19 Nov 25 03:01 UTC │ 19 Nov 25 03:01 UTC │
	│ start   │ -p embed-certs-592123 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-592123           │ jenkins │ v1.37.0 │ 19 Nov 25 03:01 UTC │ 19 Nov 25 03:02 UTC │
	│ image   │ default-k8s-diff-port-579203 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-579203 │ jenkins │ v1.37.0 │ 19 Nov 25 03:02 UTC │ 19 Nov 25 03:02 UTC │
	│ pause   │ -p default-k8s-diff-port-579203 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-579203 │ jenkins │ v1.37.0 │ 19 Nov 25 03:02 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-579203                                                                                                                                                                                                               │ default-k8s-diff-port-579203 │ jenkins │ v1.37.0 │ 19 Nov 25 03:02 UTC │ 19 Nov 25 03:02 UTC │
	│ delete  │ -p default-k8s-diff-port-579203                                                                                                                                                                                                               │ default-k8s-diff-port-579203 │ jenkins │ v1.37.0 │ 19 Nov 25 03:02 UTC │ 19 Nov 25 03:02 UTC │
	│ delete  │ -p disable-driver-mounts-722439                                                                                                                                                                                                               │ disable-driver-mounts-722439 │ jenkins │ v1.37.0 │ 19 Nov 25 03:02 UTC │ 19 Nov 25 03:02 UTC │
	│ start   │ -p no-preload-800908 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-800908            │ jenkins │ v1.37.0 │ 19 Nov 25 03:02 UTC │                     │
	│ image   │ embed-certs-592123 image list --format=json                                                                                                                                                                                                   │ embed-certs-592123           │ jenkins │ v1.37.0 │ 19 Nov 25 03:02 UTC │ 19 Nov 25 03:02 UTC │
	│ pause   │ -p embed-certs-592123 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-592123           │ jenkins │ v1.37.0 │ 19 Nov 25 03:02 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 03:02:35
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 03:02:35.455853 1662687 out.go:360] Setting OutFile to fd 1 ...
	I1119 03:02:35.456010 1662687 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 03:02:35.456044 1662687 out.go:374] Setting ErrFile to fd 2...
	I1119 03:02:35.456056 1662687 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 03:02:35.456332 1662687 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-1463525/.minikube/bin
	I1119 03:02:35.456780 1662687 out.go:368] Setting JSON to false
	I1119 03:02:35.457869 1662687 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":38683,"bootTime":1763482673,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1119 03:02:35.457937 1662687 start.go:143] virtualization:  
	I1119 03:02:35.462367 1662687 out.go:179] * [no-preload-800908] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1119 03:02:35.466579 1662687 out.go:179]   - MINIKUBE_LOCATION=21924
	I1119 03:02:35.466647 1662687 notify.go:221] Checking for updates...
	I1119 03:02:35.472895 1662687 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 03:02:35.475923 1662687 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21924-1463525/kubeconfig
	I1119 03:02:35.478982 1662687 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-1463525/.minikube
	I1119 03:02:35.482008 1662687 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1119 03:02:35.484940 1662687 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 03:02:35.489794 1662687 config.go:182] Loaded profile config "embed-certs-592123": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 03:02:35.489945 1662687 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 03:02:35.525697 1662687 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1119 03:02:35.525861 1662687 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 03:02:35.597807 1662687 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-19 03:02:35.588766011 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 03:02:35.597916 1662687 docker.go:319] overlay module found
	I1119 03:02:35.601134 1662687 out.go:179] * Using the docker driver based on user configuration
	I1119 03:02:35.604082 1662687 start.go:309] selected driver: docker
	I1119 03:02:35.604103 1662687 start.go:930] validating driver "docker" against <nil>
	I1119 03:02:35.604117 1662687 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 03:02:35.604880 1662687 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 03:02:35.660739 1662687 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-19 03:02:35.651049118 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 03:02:35.660897 1662687 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1119 03:02:35.661136 1662687 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 03:02:35.664082 1662687 out.go:179] * Using Docker driver with root privileges
	I1119 03:02:35.666850 1662687 cni.go:84] Creating CNI manager for ""
	I1119 03:02:35.666920 1662687 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 03:02:35.666933 1662687 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1119 03:02:35.667016 1662687 start.go:353] cluster config:
	{Name:no-preload-800908 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-800908 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID
:0 GPUs: AutoPauseInterval:1m0s}
	I1119 03:02:35.670103 1662687 out.go:179] * Starting "no-preload-800908" primary control-plane node in "no-preload-800908" cluster
	I1119 03:02:35.673063 1662687 cache.go:134] Beginning downloading kic base image for docker with crio
	I1119 03:02:35.675961 1662687 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1119 03:02:35.678860 1662687 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 03:02:35.678936 1662687 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1119 03:02:35.678985 1662687 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/no-preload-800908/config.json ...
	I1119 03:02:35.679017 1662687 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/no-preload-800908/config.json: {Name:mkbb903fef33ebfaa212203e6dd156ba4fd7ef3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 03:02:35.679250 1662687 cache.go:107] acquiring lock: {Name:mkb58f30e5376d33040dfa777b3f8180ea85082b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 03:02:35.679314 1662687 cache.go:115] /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1119 03:02:35.679329 1662687 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 87.489µs
	I1119 03:02:35.679337 1662687 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1119 03:02:35.679353 1662687 cache.go:107] acquiring lock: {Name:mk4427b1057ed3426220ced6aa14c26e167661f6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 03:02:35.679428 1662687 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1119 03:02:35.679765 1662687 cache.go:107] acquiring lock: {Name:mke3a5e1f8219de1d6d968640b180760e94eaad4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 03:02:35.679937 1662687 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1119 03:02:35.680196 1662687 cache.go:107] acquiring lock: {Name:mkc90d3e387ee9423dce3105ec70e08f9a213a9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 03:02:35.680331 1662687 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1119 03:02:35.680511 1662687 cache.go:107] acquiring lock: {Name:mk6ffbb0756aa279cf3ba05ddd5e5f7e66e5cbe5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 03:02:35.680636 1662687 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1119 03:02:35.680882 1662687 cache.go:107] acquiring lock: {Name:mk88c3661a1e8c3438804e10f7c7d80646d19f18 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 03:02:35.681004 1662687 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1119 03:02:35.681200 1662687 cache.go:107] acquiring lock: {Name:mk4358ffb1d662d66c4de9c14824434035268345 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 03:02:35.681311 1662687 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1119 03:02:35.681590 1662687 cache.go:107] acquiring lock: {Name:mk1d702ebd613a383e3fb22e99729e7baba0b90f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 03:02:35.681721 1662687 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1119 03:02:35.683203 1662687 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1119 03:02:35.683855 1662687 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1119 03:02:35.684617 1662687 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1119 03:02:35.685035 1662687 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1119 03:02:35.685244 1662687 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1119 03:02:35.685439 1662687 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1119 03:02:35.685640 1662687 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1119 03:02:35.704167 1662687 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1119 03:02:35.704191 1662687 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1119 03:02:35.704204 1662687 cache.go:243] Successfully downloaded all kic artifacts
	I1119 03:02:35.704227 1662687 start.go:360] acquireMachinesLock for no-preload-800908: {Name:mk6bdccc03286e3d7d2db959eee2861a6643234c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 03:02:35.704329 1662687 start.go:364] duration metric: took 82.213µs to acquireMachinesLock for "no-preload-800908"
	I1119 03:02:35.704359 1662687 start.go:93] Provisioning new machine with config: &{Name:no-preload-800908 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-800908 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwa
rePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 03:02:35.704432 1662687 start.go:125] createHost starting for "" (driver="docker")
	I1119 03:02:35.708048 1662687 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1119 03:02:35.708291 1662687 start.go:159] libmachine.API.Create for "no-preload-800908" (driver="docker")
	I1119 03:02:35.708338 1662687 client.go:173] LocalClient.Create starting
	I1119 03:02:35.708427 1662687 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem
	I1119 03:02:35.708461 1662687 main.go:143] libmachine: Decoding PEM data...
	I1119 03:02:35.708473 1662687 main.go:143] libmachine: Parsing certificate...
	I1119 03:02:35.708523 1662687 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/cert.pem
	I1119 03:02:35.708540 1662687 main.go:143] libmachine: Decoding PEM data...
	I1119 03:02:35.708549 1662687 main.go:143] libmachine: Parsing certificate...
	I1119 03:02:35.708925 1662687 cli_runner.go:164] Run: docker network inspect no-preload-800908 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1119 03:02:35.733165 1662687 cli_runner.go:211] docker network inspect no-preload-800908 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1119 03:02:35.733320 1662687 network_create.go:284] running [docker network inspect no-preload-800908] to gather additional debugging logs...
	I1119 03:02:35.733365 1662687 cli_runner.go:164] Run: docker network inspect no-preload-800908
	W1119 03:02:35.751085 1662687 cli_runner.go:211] docker network inspect no-preload-800908 returned with exit code 1
	I1119 03:02:35.751115 1662687 network_create.go:287] error running [docker network inspect no-preload-800908]: docker network inspect no-preload-800908: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-800908 not found
	I1119 03:02:35.751129 1662687 network_create.go:289] output of [docker network inspect no-preload-800908]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-800908 not found
	
	** /stderr **
	I1119 03:02:35.751250 1662687 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 03:02:35.768246 1662687 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-30778cc553ec IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:62:24:59:d9:05:e6} reservation:<nil>}
	I1119 03:02:35.768642 1662687 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-564f8befa544 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:aa:bb:c9:f1:3d:0c} reservation:<nil>}
	I1119 03:02:35.768898 1662687 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-fccf9ce7bac2 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:92:9c:a6:ca:f9:d9} reservation:<nil>}
	I1119 03:02:35.769205 1662687 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-b71e8f31cf38 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:1a:d4:9b:56:8c:d1} reservation:<nil>}
	I1119 03:02:35.769717 1662687 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001c4efa0}
	I1119 03:02:35.769741 1662687 network_create.go:124] attempt to create docker network no-preload-800908 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1119 03:02:35.769802 1662687 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-800908 no-preload-800908
	I1119 03:02:35.845113 1662687 network_create.go:108] docker network no-preload-800908 192.168.85.0/24 created
	I1119 03:02:35.845186 1662687 kic.go:121] calculated static IP "192.168.85.2" for the "no-preload-800908" container
	I1119 03:02:35.845274 1662687 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1119 03:02:35.862331 1662687 cli_runner.go:164] Run: docker volume create no-preload-800908 --label name.minikube.sigs.k8s.io=no-preload-800908 --label created_by.minikube.sigs.k8s.io=true
	I1119 03:02:35.881071 1662687 oci.go:103] Successfully created a docker volume no-preload-800908
	I1119 03:02:35.881166 1662687 cli_runner.go:164] Run: docker run --rm --name no-preload-800908-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-800908 --entrypoint /usr/bin/test -v no-preload-800908:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib
	I1119 03:02:36.036021 1662687 cache.go:162] opening:  /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1119 03:02:36.041020 1662687 cache.go:162] opening:  /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1119 03:02:36.052050 1662687 cache.go:162] opening:  /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1119 03:02:36.065402 1662687 cache.go:162] opening:  /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1119 03:02:36.067400 1662687 cache.go:162] opening:  /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1119 03:02:36.155708 1662687 cache.go:157] /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1119 03:02:36.155751 1662687 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 474.880794ms
	I1119 03:02:36.155778 1662687 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1119 03:02:36.157157 1662687 cache.go:162] opening:  /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1119 03:02:36.254093 1662687 cache.go:162] opening:  /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1119 03:02:36.543532 1662687 oci.go:107] Successfully prepared a docker volume no-preload-800908
	I1119 03:02:36.543570 1662687 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	W1119 03:02:36.543698 1662687 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1119 03:02:36.543805 1662687 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1119 03:02:36.591415 1662687 cache.go:157] /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1119 03:02:36.591486 1662687 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 910.975286ms
	I1119 03:02:36.591514 1662687 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1119 03:02:36.607106 1662687 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-800908 --name no-preload-800908 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-800908 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-800908 --network no-preload-800908 --ip 192.168.85.2 --volume no-preload-800908:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a
	I1119 03:02:36.951543 1662687 cli_runner.go:164] Run: docker container inspect no-preload-800908 --format={{.State.Running}}
	I1119 03:02:37.100761 1662687 cli_runner.go:164] Run: docker container inspect no-preload-800908 --format={{.State.Status}}
	I1119 03:02:37.144946 1662687 cache.go:157] /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1119 03:02:37.144986 1662687 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 1.463399861s
	I1119 03:02:37.145001 1662687 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1119 03:02:37.206270 1662687 cache.go:157] /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1119 03:02:37.206296 1662687 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 1.526104035s
	I1119 03:02:37.206308 1662687 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1119 03:02:37.210065 1662687 cli_runner.go:164] Run: docker exec no-preload-800908 stat /var/lib/dpkg/alternatives/iptables
	I1119 03:02:37.213978 1662687 cache.go:157] /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1119 03:02:37.214001 1662687 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 1.534648003s
	I1119 03:02:37.214016 1662687 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1119 03:02:37.297807 1662687 cache.go:157] /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1119 03:02:37.297833 1662687 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 1.618072527s
	I1119 03:02:37.297845 1662687 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1119 03:02:37.299191 1662687 oci.go:144] the created container "no-preload-800908" has a running status.
	I1119 03:02:37.299215 1662687 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21924-1463525/.minikube/machines/no-preload-800908/id_rsa...
	I1119 03:02:37.743801 1662687 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21924-1463525/.minikube/machines/no-preload-800908/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1119 03:02:37.771982 1662687 cli_runner.go:164] Run: docker container inspect no-preload-800908 --format={{.State.Status}}
	I1119 03:02:37.788058 1662687 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1119 03:02:37.788082 1662687 kic_runner.go:114] Args: [docker exec --privileged no-preload-800908 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1119 03:02:37.836147 1662687 cli_runner.go:164] Run: docker container inspect no-preload-800908 --format={{.State.Status}}
	I1119 03:02:37.856889 1662687 machine.go:94] provisionDockerMachine start ...
	I1119 03:02:37.856990 1662687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-800908
	I1119 03:02:37.875528 1662687 main.go:143] libmachine: Using SSH client type: native
	I1119 03:02:37.875859 1662687 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34925 <nil> <nil>}
	I1119 03:02:37.875877 1662687 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 03:02:37.876540 1662687 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59892->127.0.0.1:34925: read: connection reset by peer
	I1119 03:02:38.961418 1662687 cache.go:157] /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1119 03:02:38.961443 1662687 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 3.280261838s
	I1119 03:02:38.961454 1662687 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1119 03:02:38.961466 1662687 cache.go:87] Successfully saved all images to host disk.
	
	
	==> CRI-O <==
	Nov 19 03:02:18 embed-certs-592123 crio[650]: time="2025-11-19T03:02:18.24779683Z" level=info msg="Removed container 93204908b3c68b6897a0530972a0508b65471c393c4909e541cdfbbcc62d813d: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dftfl/dashboard-metrics-scraper" id=db6e3976-ed35-4cba-aeef-4e0ddebb723a name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 19 03:02:22 embed-certs-592123 conmon[1120]: conmon c25eeb92b97431c67651 <ninfo>: container 1127 exited with status 1
	Nov 19 03:02:23 embed-certs-592123 crio[650]: time="2025-11-19T03:02:23.251796009Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=1cda5c70-40ae-4baa-97d0-217699e5a1c9 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 03:02:23 embed-certs-592123 crio[650]: time="2025-11-19T03:02:23.252667952Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=3ce11f92-4492-456a-a5aa-18c3e42e16b9 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 03:02:23 embed-certs-592123 crio[650]: time="2025-11-19T03:02:23.253498984Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=9b751877-0f16-4697-9e4a-b2eb1a58292c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 03:02:23 embed-certs-592123 crio[650]: time="2025-11-19T03:02:23.253695409Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 03:02:23 embed-certs-592123 crio[650]: time="2025-11-19T03:02:23.262068421Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 03:02:23 embed-certs-592123 crio[650]: time="2025-11-19T03:02:23.262245401Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/da0e0a0d5f91b39ddc9187045af3dcb459d761c7152ce1400e3435907350e31d/merged/etc/passwd: no such file or directory"
	Nov 19 03:02:23 embed-certs-592123 crio[650]: time="2025-11-19T03:02:23.26227214Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/da0e0a0d5f91b39ddc9187045af3dcb459d761c7152ce1400e3435907350e31d/merged/etc/group: no such file or directory"
	Nov 19 03:02:23 embed-certs-592123 crio[650]: time="2025-11-19T03:02:23.262516482Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 03:02:23 embed-certs-592123 crio[650]: time="2025-11-19T03:02:23.277154012Z" level=info msg="Created container d75fb98c294954a086d7ac0cd21c45155c83c760cc142791e7d4eca8043ba541: kube-system/storage-provisioner/storage-provisioner" id=9b751877-0f16-4697-9e4a-b2eb1a58292c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 03:02:23 embed-certs-592123 crio[650]: time="2025-11-19T03:02:23.278238174Z" level=info msg="Starting container: d75fb98c294954a086d7ac0cd21c45155c83c760cc142791e7d4eca8043ba541" id=016f7afc-1e76-40f8-b72e-8fe2cb445521 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 03:02:23 embed-certs-592123 crio[650]: time="2025-11-19T03:02:23.280086178Z" level=info msg="Started container" PID=1647 containerID=d75fb98c294954a086d7ac0cd21c45155c83c760cc142791e7d4eca8043ba541 description=kube-system/storage-provisioner/storage-provisioner id=016f7afc-1e76-40f8-b72e-8fe2cb445521 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d2492c4dee44c9ed9fe8ffeb8923d46abaa2b4170225b84e1d1edd0f783239cc
	Nov 19 03:02:32 embed-certs-592123 crio[650]: time="2025-11-19T03:02:32.432564182Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 03:02:32 embed-certs-592123 crio[650]: time="2025-11-19T03:02:32.436829118Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 03:02:32 embed-certs-592123 crio[650]: time="2025-11-19T03:02:32.436863357Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 03:02:32 embed-certs-592123 crio[650]: time="2025-11-19T03:02:32.436884337Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 03:02:32 embed-certs-592123 crio[650]: time="2025-11-19T03:02:32.440859928Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 03:02:32 embed-certs-592123 crio[650]: time="2025-11-19T03:02:32.440895217Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 03:02:32 embed-certs-592123 crio[650]: time="2025-11-19T03:02:32.44091737Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 03:02:32 embed-certs-592123 crio[650]: time="2025-11-19T03:02:32.444560605Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 03:02:32 embed-certs-592123 crio[650]: time="2025-11-19T03:02:32.444595575Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 03:02:32 embed-certs-592123 crio[650]: time="2025-11-19T03:02:32.444618959Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 03:02:32 embed-certs-592123 crio[650]: time="2025-11-19T03:02:32.44803386Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 03:02:32 embed-certs-592123 crio[650]: time="2025-11-19T03:02:32.448067196Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	d75fb98c29495       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           21 seconds ago       Running             storage-provisioner         2                   d2492c4dee44c       storage-provisioner                          kube-system
	9ec1e7c5c349f       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           26 seconds ago       Exited              dashboard-metrics-scraper   2                   ffe0d4b7a57fd       dashboard-metrics-scraper-6ffb444bf9-dftfl   kubernetes-dashboard
	b107b252e2343       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   37 seconds ago       Running             kubernetes-dashboard        0                   5f082440126cf       kubernetes-dashboard-855c9754f9-76f6n        kubernetes-dashboard
	00cd4bbd35ff9       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           52 seconds ago       Running             coredns                     1                   f903c213bc8e2       coredns-66bc5c9577-vtc44                     kube-system
	92bc8dc1a004d       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           52 seconds ago       Running             busybox                     1                   9d89487a4f031       busybox                                      default
	693b87a40338d       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           52 seconds ago       Running             kindnet-cni                 1                   e96e5ce4ffa3c       kindnet-sv99p                                kube-system
	de8510aae91eb       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           52 seconds ago       Running             kube-proxy                  1                   5300a780ed2e2       kube-proxy-55pcf                             kube-system
	c25eeb92b9743       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           52 seconds ago       Exited              storage-provisioner         1                   d2492c4dee44c       storage-provisioner                          kube-system
	28baf9cda670a       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   f7854adca359c       kube-apiserver-embed-certs-592123            kube-system
	44051fa115dbd       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   f6d7b195302d2       kube-controller-manager-embed-certs-592123   kube-system
	0c30389a4661b       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   8758ace803ba8       etcd-embed-certs-592123                      kube-system
	50a2bdb9c6751       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   5630999172fc3       kube-scheduler-embed-certs-592123            kube-system
	
	
	==> coredns [00cd4bbd35ff9b6d4771e35883c59aeba78d227be099a95a6bb86d479cf45616] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:52602 - 9460 "HINFO IN 442879192984157348.4078967189326169954. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.026935516s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-592123
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-592123
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ebd49efe8b7d44f89fc96e21beffcd64dfee8277
	                    minikube.k8s.io/name=embed-certs-592123
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T03_00_18_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 03:00:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-592123
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 03:02:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 03:02:21 +0000   Wed, 19 Nov 2025 03:00:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 03:02:21 +0000   Wed, 19 Nov 2025 03:00:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 03:02:21 +0000   Wed, 19 Nov 2025 03:00:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 03:02:21 +0000   Wed, 19 Nov 2025 03:01:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-592123
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                c8c3e11e-b7bd-48ff-908e-852c6643928c
	  Boot ID:                    b92b1939-fcd0-45dc-ac89-2d161566a71c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 coredns-66bc5c9577-vtc44                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m21s
	  kube-system                 etcd-embed-certs-592123                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m30s
	  kube-system                 kindnet-sv99p                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m22s
	  kube-system                 kube-apiserver-embed-certs-592123             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 kube-controller-manager-embed-certs-592123    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 kube-proxy-55pcf                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 kube-scheduler-embed-certs-592123             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m30s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-dftfl    0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-76f6n         0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m20s                  kube-proxy       
	  Normal   Starting                 51s                    kube-proxy       
	  Normal   Starting                 2m36s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m36s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m36s (x8 over 2m36s)  kubelet          Node embed-certs-592123 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m36s (x8 over 2m36s)  kubelet          Node embed-certs-592123 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m36s (x8 over 2m36s)  kubelet          Node embed-certs-592123 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m27s                  kubelet          Node embed-certs-592123 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m27s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m27s                  kubelet          Node embed-certs-592123 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m27s                  kubelet          Node embed-certs-592123 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m27s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m23s                  node-controller  Node embed-certs-592123 event: Registered Node embed-certs-592123 in Controller
	  Normal   NodeReady                100s                   kubelet          Node embed-certs-592123 status is now: NodeReady
	  Normal   Starting                 63s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 63s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  62s (x8 over 63s)      kubelet          Node embed-certs-592123 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    62s (x8 over 63s)      kubelet          Node embed-certs-592123 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     62s (x8 over 63s)      kubelet          Node embed-certs-592123 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           50s                    node-controller  Node embed-certs-592123 event: Registered Node embed-certs-592123 in Controller
	
	
	==> dmesg <==
	[Nov19 02:38] overlayfs: idmapped layers are currently not supported
	[Nov19 02:39] overlayfs: idmapped layers are currently not supported
	[Nov19 02:41] overlayfs: idmapped layers are currently not supported
	[ +25.528121] overlayfs: idmapped layers are currently not supported
	[ +11.329962] overlayfs: idmapped layers are currently not supported
	[Nov19 02:42] overlayfs: idmapped layers are currently not supported
	[ +16.386117] overlayfs: idmapped layers are currently not supported
	[Nov19 02:43] overlayfs: idmapped layers are currently not supported
	[ +23.762081] overlayfs: idmapped layers are currently not supported
	[Nov19 02:45] overlayfs: idmapped layers are currently not supported
	[Nov19 02:46] overlayfs: idmapped layers are currently not supported
	[Nov19 02:48] overlayfs: idmapped layers are currently not supported
	[Nov19 02:50] overlayfs: idmapped layers are currently not supported
	[ +30.622614] overlayfs: idmapped layers are currently not supported
	[Nov19 02:53] overlayfs: idmapped layers are currently not supported
	[Nov19 02:55] overlayfs: idmapped layers are currently not supported
	[ +48.629499] overlayfs: idmapped layers are currently not supported
	[Nov19 02:56] overlayfs: idmapped layers are currently not supported
	[ +31.470515] overlayfs: idmapped layers are currently not supported
	[Nov19 02:57] overlayfs: idmapped layers are currently not supported
	[Nov19 02:58] overlayfs: idmapped layers are currently not supported
	[Nov19 03:00] overlayfs: idmapped layers are currently not supported
	[  +8.385032] overlayfs: idmapped layers are currently not supported
	[Nov19 03:01] overlayfs: idmapped layers are currently not supported
	[  +9.842210] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [0c30389a4661b622b8e4e66ed3373832cf9f4abe199dc1ec782692aa5b76a699] <==
	{"level":"warn","ts":"2025-11-19T03:01:48.554511Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:48.568592Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:48.579027Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:48.605360Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:48.623668Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58374","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:48.637179Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:48.662106Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:48.699970Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:48.700121Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:48.717605Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:48.745784Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:48.767174Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:48.793355Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:48.813933Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:48.826177Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:48.851013Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:48.873061Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:48.885859Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:48.907306Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:48.939496Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:48.979447Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:49.015048Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:49.031723Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:49.063202Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:01:49.170814Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58760","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 03:02:44 up 10:44,  0 user,  load average: 3.74, 3.43, 2.77
	Linux embed-certs-592123 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [693b87a40338d1ed91a893430753efcc324f88bb8889e2774deb45e612737e46] <==
	I1119 03:01:52.250575       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 03:01:52.281784       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1119 03:01:52.281933       1 main.go:148] setting mtu 1500 for CNI 
	I1119 03:01:52.281946       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 03:01:52.281961       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T03:01:52Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 03:01:52.436533       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 03:01:52.436565       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 03:01:52.436574       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 03:01:52.436876       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1119 03:02:22.427475       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1119 03:02:22.437126       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1119 03:02:22.437137       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1119 03:02:22.437252       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1119 03:02:23.937283       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 03:02:23.937317       1 metrics.go:72] Registering metrics
	I1119 03:02:23.937388       1 controller.go:711] "Syncing nftables rules"
	I1119 03:02:32.432268       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1119 03:02:32.432323       1 main.go:301] handling current node
	I1119 03:02:42.435956       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1119 03:02:42.435992       1 main.go:301] handling current node
	
	
	==> kube-apiserver [28baf9cda670ab54ffce2ff7181d4841299d3d55c51eab8df2a52c1c366a4111] <==
	I1119 03:01:50.753444       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1119 03:01:50.755618       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1119 03:01:50.755637       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1119 03:01:50.755926       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1119 03:01:50.756933       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1119 03:01:50.792638       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 03:01:50.798830       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1119 03:01:50.798918       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1119 03:01:50.800444       1 aggregator.go:171] initial CRD sync complete...
	I1119 03:01:50.800471       1 autoregister_controller.go:144] Starting autoregister controller
	I1119 03:01:50.800479       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1119 03:01:50.800486       1 cache.go:39] Caches are synced for autoregister controller
	I1119 03:01:50.871009       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1119 03:01:51.010424       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1119 03:01:51.167851       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1119 03:01:51.239677       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 03:01:52.515081       1 controller.go:667] quota admission added evaluator for: namespaces
	I1119 03:01:52.625821       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1119 03:01:52.704773       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 03:01:52.735906       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 03:01:52.930658       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.186.228"}
	I1119 03:01:52.969692       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.30.238"}
	I1119 03:01:54.806841       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1119 03:01:55.112477       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1119 03:01:55.205539       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [44051fa115dbdefd2547da0097f35a9d487cbcc9b4becc2a70f91a77a0d1da21] <==
	I1119 03:01:54.713430       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 03:01:54.713826       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1119 03:01:54.715200       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 03:01:54.731354       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1119 03:01:54.735484       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1119 03:01:54.739936       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1119 03:01:54.740055       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1119 03:01:54.740111       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1119 03:01:54.740140       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1119 03:01:54.740166       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1119 03:01:54.742555       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1119 03:01:54.747178       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1119 03:01:54.747264       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1119 03:01:54.747278       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1119 03:01:54.747688       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1119 03:01:54.748075       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1119 03:01:54.755333       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1119 03:01:54.758268       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1119 03:01:54.762753       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1119 03:01:54.763156       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1119 03:01:54.766427       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 03:01:54.797212       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 03:01:54.797311       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1119 03:01:54.797342       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1119 03:01:54.797980       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	
	
	==> kube-proxy [de8510aae91ebe4f52e6549cedefec1262166611b28429518e2b4db5fffb05e5] <==
	I1119 03:01:52.462784       1 server_linux.go:53] "Using iptables proxy"
	I1119 03:01:52.608894       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 03:01:52.718073       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 03:01:52.718109       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1119 03:01:52.718191       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 03:01:53.113068       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 03:01:53.134401       1 server_linux.go:132] "Using iptables Proxier"
	I1119 03:01:53.152653       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 03:01:53.153139       1 server.go:527] "Version info" version="v1.34.1"
	I1119 03:01:53.153389       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 03:01:53.156530       1 config.go:200] "Starting service config controller"
	I1119 03:01:53.156586       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 03:01:53.156685       1 config.go:106] "Starting endpoint slice config controller"
	I1119 03:01:53.156718       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 03:01:53.156756       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 03:01:53.156781       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 03:01:53.157470       1 config.go:309] "Starting node config controller"
	I1119 03:01:53.157551       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 03:01:53.157592       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 03:01:53.257787       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1119 03:01:53.258326       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1119 03:01:53.258343       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [50a2bdb9c67513a1526c7008d09101b3db95d7bac468c5e2f2f7dcda041de7b5] <==
	I1119 03:01:44.650167       1 serving.go:386] Generated self-signed cert in-memory
	I1119 03:01:51.379745       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1119 03:01:51.379812       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 03:01:51.421090       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1119 03:01:51.421348       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1119 03:01:51.421413       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1119 03:01:51.421468       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1119 03:01:51.459014       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 03:01:51.459034       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 03:01:51.459069       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1119 03:01:51.459076       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1119 03:01:51.543139       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1119 03:01:51.560057       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1119 03:01:51.560182       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 19 03:01:55 embed-certs-592123 kubelet[778]: I1119 03:01:55.463442     778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnl9r\" (UniqueName: \"kubernetes.io/projected/087ebb06-98c5-4966-a059-ba81f8ae1b3d-kube-api-access-qnl9r\") pod \"dashboard-metrics-scraper-6ffb444bf9-dftfl\" (UID: \"087ebb06-98c5-4966-a059-ba81f8ae1b3d\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dftfl"
	Nov 19 03:01:55 embed-certs-592123 kubelet[778]: I1119 03:01:55.463471     778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/d7ebcebd-3f82-4d27-8b51-e33625e09608-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-76f6n\" (UID: \"d7ebcebd-3f82-4d27-8b51-e33625e09608\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-76f6n"
	Nov 19 03:01:55 embed-certs-592123 kubelet[778]: I1119 03:01:55.463500     778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzffk\" (UniqueName: \"kubernetes.io/projected/d7ebcebd-3f82-4d27-8b51-e33625e09608-kube-api-access-wzffk\") pod \"kubernetes-dashboard-855c9754f9-76f6n\" (UID: \"d7ebcebd-3f82-4d27-8b51-e33625e09608\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-76f6n"
	Nov 19 03:01:55 embed-certs-592123 kubelet[778]: W1119 03:01:55.699045     778 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/dac66acc5df417ae4fe7f3148566f999b25ed8eb465f085a14cb838106ad0a5e/crio-ffe0d4b7a57fd4d57cc092294130adc66988df4f247cf1b4b7cdcd63d01b9954 WatchSource:0}: Error finding container ffe0d4b7a57fd4d57cc092294130adc66988df4f247cf1b4b7cdcd63d01b9954: Status 404 returned error can't find the container with id ffe0d4b7a57fd4d57cc092294130adc66988df4f247cf1b4b7cdcd63d01b9954
	Nov 19 03:02:02 embed-certs-592123 kubelet[778]: I1119 03:02:02.189052     778 scope.go:117] "RemoveContainer" containerID="17a9d01c833acc384da867b00fc47910b5b45ee2861c6a8defa5d8286de68ef6"
	Nov 19 03:02:03 embed-certs-592123 kubelet[778]: I1119 03:02:03.197404     778 scope.go:117] "RemoveContainer" containerID="93204908b3c68b6897a0530972a0508b65471c393c4909e541cdfbbcc62d813d"
	Nov 19 03:02:03 embed-certs-592123 kubelet[778]: E1119 03:02:03.197630     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dftfl_kubernetes-dashboard(087ebb06-98c5-4966-a059-ba81f8ae1b3d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dftfl" podUID="087ebb06-98c5-4966-a059-ba81f8ae1b3d"
	Nov 19 03:02:03 embed-certs-592123 kubelet[778]: I1119 03:02:03.198614     778 scope.go:117] "RemoveContainer" containerID="17a9d01c833acc384da867b00fc47910b5b45ee2861c6a8defa5d8286de68ef6"
	Nov 19 03:02:04 embed-certs-592123 kubelet[778]: I1119 03:02:04.200792     778 scope.go:117] "RemoveContainer" containerID="93204908b3c68b6897a0530972a0508b65471c393c4909e541cdfbbcc62d813d"
	Nov 19 03:02:04 embed-certs-592123 kubelet[778]: E1119 03:02:04.200935     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dftfl_kubernetes-dashboard(087ebb06-98c5-4966-a059-ba81f8ae1b3d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dftfl" podUID="087ebb06-98c5-4966-a059-ba81f8ae1b3d"
	Nov 19 03:02:05 embed-certs-592123 kubelet[778]: I1119 03:02:05.646966     778 scope.go:117] "RemoveContainer" containerID="93204908b3c68b6897a0530972a0508b65471c393c4909e541cdfbbcc62d813d"
	Nov 19 03:02:05 embed-certs-592123 kubelet[778]: E1119 03:02:05.647157     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dftfl_kubernetes-dashboard(087ebb06-98c5-4966-a059-ba81f8ae1b3d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dftfl" podUID="087ebb06-98c5-4966-a059-ba81f8ae1b3d"
	Nov 19 03:02:17 embed-certs-592123 kubelet[778]: I1119 03:02:17.980355     778 scope.go:117] "RemoveContainer" containerID="93204908b3c68b6897a0530972a0508b65471c393c4909e541cdfbbcc62d813d"
	Nov 19 03:02:18 embed-certs-592123 kubelet[778]: I1119 03:02:18.236660     778 scope.go:117] "RemoveContainer" containerID="93204908b3c68b6897a0530972a0508b65471c393c4909e541cdfbbcc62d813d"
	Nov 19 03:02:19 embed-certs-592123 kubelet[778]: I1119 03:02:19.240337     778 scope.go:117] "RemoveContainer" containerID="9ec1e7c5c349fe1b082432751f82acffb163fa8de01a3b8cbd8e6a9956820502"
	Nov 19 03:02:19 embed-certs-592123 kubelet[778]: E1119 03:02:19.240500     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dftfl_kubernetes-dashboard(087ebb06-98c5-4966-a059-ba81f8ae1b3d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dftfl" podUID="087ebb06-98c5-4966-a059-ba81f8ae1b3d"
	Nov 19 03:02:19 embed-certs-592123 kubelet[778]: I1119 03:02:19.255922     778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-76f6n" podStartSLOduration=12.99969319 podStartE2EDuration="24.255038995s" podCreationTimestamp="2025-11-19 03:01:55 +0000 UTC" firstStartedPulling="2025-11-19 03:01:55.728851766 +0000 UTC m=+14.118829444" lastFinishedPulling="2025-11-19 03:02:06.984197563 +0000 UTC m=+25.374175249" observedRunningTime="2025-11-19 03:02:07.231786735 +0000 UTC m=+25.621764412" watchObservedRunningTime="2025-11-19 03:02:19.255038995 +0000 UTC m=+37.645016673"
	Nov 19 03:02:23 embed-certs-592123 kubelet[778]: I1119 03:02:23.251373     778 scope.go:117] "RemoveContainer" containerID="c25eeb92b97431c67651c962c7e0e3bc4fdf21f9c78ccc749a241b682fb1ee70"
	Nov 19 03:02:25 embed-certs-592123 kubelet[778]: I1119 03:02:25.647023     778 scope.go:117] "RemoveContainer" containerID="9ec1e7c5c349fe1b082432751f82acffb163fa8de01a3b8cbd8e6a9956820502"
	Nov 19 03:02:25 embed-certs-592123 kubelet[778]: E1119 03:02:25.647194     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dftfl_kubernetes-dashboard(087ebb06-98c5-4966-a059-ba81f8ae1b3d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dftfl" podUID="087ebb06-98c5-4966-a059-ba81f8ae1b3d"
	Nov 19 03:02:36 embed-certs-592123 kubelet[778]: I1119 03:02:36.979861     778 scope.go:117] "RemoveContainer" containerID="9ec1e7c5c349fe1b082432751f82acffb163fa8de01a3b8cbd8e6a9956820502"
	Nov 19 03:02:36 embed-certs-592123 kubelet[778]: E1119 03:02:36.980038     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dftfl_kubernetes-dashboard(087ebb06-98c5-4966-a059-ba81f8ae1b3d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dftfl" podUID="087ebb06-98c5-4966-a059-ba81f8ae1b3d"
	Nov 19 03:02:39 embed-certs-592123 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 19 03:02:39 embed-certs-592123 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 19 03:02:39 embed-certs-592123 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [b107b252e23438e4e91346429aa292ef99768ea80e02ebc445b2a52c2f401d41] <==
	2025/11/19 03:02:07 Using namespace: kubernetes-dashboard
	2025/11/19 03:02:07 Using in-cluster config to connect to apiserver
	2025/11/19 03:02:07 Using secret token for csrf signing
	2025/11/19 03:02:07 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/19 03:02:07 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/19 03:02:07 Successful initial request to the apiserver, version: v1.34.1
	2025/11/19 03:02:07 Generating JWE encryption key
	2025/11/19 03:02:07 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/19 03:02:07 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/19 03:02:07 Initializing JWE encryption key from synchronized object
	2025/11/19 03:02:07 Creating in-cluster Sidecar client
	2025/11/19 03:02:07 Serving insecurely on HTTP port: 9090
	2025/11/19 03:02:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/19 03:02:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/19 03:02:07 Starting overwatch
	
	
	==> storage-provisioner [c25eeb92b97431c67651c962c7e0e3bc4fdf21f9c78ccc749a241b682fb1ee70] <==
	I1119 03:01:52.488861       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1119 03:02:22.497050       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [d75fb98c294954a086d7ac0cd21c45155c83c760cc142791e7d4eca8043ba541] <==
	I1119 03:02:23.307918       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1119 03:02:23.319785       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1119 03:02:23.319837       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1119 03:02:23.323062       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:02:26.779393       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:02:31.039497       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:02:34.637980       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:02:37.692109       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:02:40.714346       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:02:40.722512       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 03:02:40.722654       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1119 03:02:40.722831       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-592123_6ff3290a-7f89-4f14-a8c2-88b2b0dd9106!
	I1119 03:02:40.722886       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"18542100-311c-4ccc-932d-a0e1133b54bb", APIVersion:"v1", ResourceVersion:"687", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-592123_6ff3290a-7f89-4f14-a8c2-88b2b0dd9106 became leader
	W1119 03:02:40.730949       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:02:40.736904       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 03:02:40.823541       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-592123_6ff3290a-7f89-4f14-a8c2-88b2b0dd9106!
	W1119 03:02:42.740223       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:02:42.748954       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:02:44.755658       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:02:44.780769       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-592123 -n embed-certs-592123
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-592123 -n embed-certs-592123: exit status 2 (474.743188ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-592123 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (7.66s)
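The storage-provisioner log captured above acquires its lock through the deprecated v1 Endpoints API (hence the repeated EndpointSlice warnings) before becoming leader on kube-system/k8s.io-minikube-hostpath. A minimal manual check of that leader record, assuming client-go's usual control-plane.alpha.kubernetes.io/leader annotation is where it is stored (illustrative only, not run by the test):

	# illustrative only; annotation key assumed from client-go's Endpoints-based lock
	kubectl --context embed-certs-592123 -n kube-system get endpoints k8s.io-minikube-hostpath \
	  -o jsonpath="{.metadata.annotations['control-plane\.alpha\.kubernetes\.io/leader']}"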

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.96s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-886248 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-886248 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (323.509752ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T03:03:41Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-886248 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
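The MK_ADDON_ENABLE_PAUSED exit above traces to minikube's paused-state check, which (per the stderr) shells into the node and runs sudo runc list -f json, failing because /run/runc is absent. A minimal manual reproduction, assuming the newest-cni-886248 node container from this run is still up (commands taken from the error text, not from the test itself):

	# re-run the same check the addon command performed, per the stderr above
	docker exec newest-cni-886248 sudo runc list -f json
	# confirm whether the runc state directory named in the error exists
	docker exec newest-cni-886248 ls /run/runc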
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-886248
helpers_test.go:243: (dbg) docker inspect newest-cni-886248:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9ceb6de1b4d7ca96fef3ff191371417b9c4e247fd4589ac0d2a1c844b9532578",
	        "Created": "2025-11-19T03:02:56.888437987Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1666542,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T03:02:57.140379349Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:6dfeb5329cc83d126555f88f43b09eef7a09c7f546c9166b94d33747df91b6df",
	        "ResolvConfPath": "/var/lib/docker/containers/9ceb6de1b4d7ca96fef3ff191371417b9c4e247fd4589ac0d2a1c844b9532578/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9ceb6de1b4d7ca96fef3ff191371417b9c4e247fd4589ac0d2a1c844b9532578/hostname",
	        "HostsPath": "/var/lib/docker/containers/9ceb6de1b4d7ca96fef3ff191371417b9c4e247fd4589ac0d2a1c844b9532578/hosts",
	        "LogPath": "/var/lib/docker/containers/9ceb6de1b4d7ca96fef3ff191371417b9c4e247fd4589ac0d2a1c844b9532578/9ceb6de1b4d7ca96fef3ff191371417b9c4e247fd4589ac0d2a1c844b9532578-json.log",
	        "Name": "/newest-cni-886248",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-886248:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-886248",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9ceb6de1b4d7ca96fef3ff191371417b9c4e247fd4589ac0d2a1c844b9532578",
	                "LowerDir": "/var/lib/docker/overlay2/a7a465fbddf49e2c56bd6046cea36a4642d75f6313895eecf81a83070429de04-init/diff:/var/lib/docker/overlay2/c48d08e2bd245db4e1c5c6447aff9f72126e9377265a1f1172daf5070a059e2a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a7a465fbddf49e2c56bd6046cea36a4642d75f6313895eecf81a83070429de04/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a7a465fbddf49e2c56bd6046cea36a4642d75f6313895eecf81a83070429de04/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a7a465fbddf49e2c56bd6046cea36a4642d75f6313895eecf81a83070429de04/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-886248",
	                "Source": "/var/lib/docker/volumes/newest-cni-886248/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-886248",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-886248",
	                "name.minikube.sigs.k8s.io": "newest-cni-886248",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0c2a23ac397f939d45c113f5cf455707ba05259744115d27278009237a771ae4",
	            "SandboxKey": "/var/run/docker/netns/0c2a23ac397f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34930"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34931"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34934"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34932"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34933"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-886248": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9e:d4:6a:a2:7d:02",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "40af1a67c106c706985a7e4604847892fa565af460e6b79b193e66105f198b32",
	                    "EndpointID": "bdab707676d1aa0a7b61ab6377a92d7d3a50f33c8db05175175b10a298fe236c",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-886248",
	                        "9ceb6de1b4d7"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
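Individual fields of the inspect output above can be pulled with docker's Go-template format flag; a minimal example (illustrative, not part of the test) that prints the host port mapped to the node's SSH port 22, shown above as 34930:

	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-886248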
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-886248 -n newest-cni-886248
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-886248 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-886248 logs -n 25: (1.461285499s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ delete  │ -p old-k8s-version-525469                                                                                                                                                                                                                     │ old-k8s-version-525469       │ jenkins │ v1.37.0 │ 19 Nov 25 02:59 UTC │ 19 Nov 25 02:59 UTC │
	│ delete  │ -p old-k8s-version-525469                                                                                                                                                                                                                     │ old-k8s-version-525469       │ jenkins │ v1.37.0 │ 19 Nov 25 02:59 UTC │ 19 Nov 25 02:59 UTC │
	│ start   │ -p default-k8s-diff-port-579203 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-579203 │ jenkins │ v1.37.0 │ 19 Nov 25 02:59 UTC │ 19 Nov 25 03:01 UTC │
	│ delete  │ -p cert-expiration-422184                                                                                                                                                                                                                     │ cert-expiration-422184       │ jenkins │ v1.37.0 │ 19 Nov 25 02:59 UTC │ 19 Nov 25 02:59 UTC │
	│ start   │ -p embed-certs-592123 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-592123           │ jenkins │ v1.37.0 │ 19 Nov 25 02:59 UTC │ 19 Nov 25 03:01 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-579203 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-579203 │ jenkins │ v1.37.0 │ 19 Nov 25 03:01 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-579203 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-579203 │ jenkins │ v1.37.0 │ 19 Nov 25 03:01 UTC │ 19 Nov 25 03:01 UTC │
	│ addons  │ enable metrics-server -p embed-certs-592123 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-592123           │ jenkins │ v1.37.0 │ 19 Nov 25 03:01 UTC │                     │
	│ stop    │ -p embed-certs-592123 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-592123           │ jenkins │ v1.37.0 │ 19 Nov 25 03:01 UTC │ 19 Nov 25 03:01 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-579203 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-579203 │ jenkins │ v1.37.0 │ 19 Nov 25 03:01 UTC │ 19 Nov 25 03:01 UTC │
	│ start   │ -p default-k8s-diff-port-579203 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-579203 │ jenkins │ v1.37.0 │ 19 Nov 25 03:01 UTC │ 19 Nov 25 03:02 UTC │
	│ addons  │ enable dashboard -p embed-certs-592123 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-592123           │ jenkins │ v1.37.0 │ 19 Nov 25 03:01 UTC │ 19 Nov 25 03:01 UTC │
	│ start   │ -p embed-certs-592123 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-592123           │ jenkins │ v1.37.0 │ 19 Nov 25 03:01 UTC │ 19 Nov 25 03:02 UTC │
	│ image   │ default-k8s-diff-port-579203 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-579203 │ jenkins │ v1.37.0 │ 19 Nov 25 03:02 UTC │ 19 Nov 25 03:02 UTC │
	│ pause   │ -p default-k8s-diff-port-579203 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-579203 │ jenkins │ v1.37.0 │ 19 Nov 25 03:02 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-579203                                                                                                                                                                                                               │ default-k8s-diff-port-579203 │ jenkins │ v1.37.0 │ 19 Nov 25 03:02 UTC │ 19 Nov 25 03:02 UTC │
	│ delete  │ -p default-k8s-diff-port-579203                                                                                                                                                                                                               │ default-k8s-diff-port-579203 │ jenkins │ v1.37.0 │ 19 Nov 25 03:02 UTC │ 19 Nov 25 03:02 UTC │
	│ delete  │ -p disable-driver-mounts-722439                                                                                                                                                                                                               │ disable-driver-mounts-722439 │ jenkins │ v1.37.0 │ 19 Nov 25 03:02 UTC │ 19 Nov 25 03:02 UTC │
	│ start   │ -p no-preload-800908 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-800908            │ jenkins │ v1.37.0 │ 19 Nov 25 03:02 UTC │                     │
	│ image   │ embed-certs-592123 image list --format=json                                                                                                                                                                                                   │ embed-certs-592123           │ jenkins │ v1.37.0 │ 19 Nov 25 03:02 UTC │ 19 Nov 25 03:02 UTC │
	│ pause   │ -p embed-certs-592123 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-592123           │ jenkins │ v1.37.0 │ 19 Nov 25 03:02 UTC │                     │
	│ delete  │ -p embed-certs-592123                                                                                                                                                                                                                         │ embed-certs-592123           │ jenkins │ v1.37.0 │ 19 Nov 25 03:02 UTC │ 19 Nov 25 03:02 UTC │
	│ delete  │ -p embed-certs-592123                                                                                                                                                                                                                         │ embed-certs-592123           │ jenkins │ v1.37.0 │ 19 Nov 25 03:02 UTC │ 19 Nov 25 03:02 UTC │
	│ start   │ -p newest-cni-886248 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-886248            │ jenkins │ v1.37.0 │ 19 Nov 25 03:02 UTC │ 19 Nov 25 03:03 UTC │
	│ addons  │ enable metrics-server -p newest-cni-886248 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-886248            │ jenkins │ v1.37.0 │ 19 Nov 25 03:03 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 03:02:50
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 03:02:50.048079 1665998 out.go:360] Setting OutFile to fd 1 ...
	I1119 03:02:50.048320 1665998 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 03:02:50.048348 1665998 out.go:374] Setting ErrFile to fd 2...
	I1119 03:02:50.048367 1665998 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 03:02:50.048680 1665998 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-1463525/.minikube/bin
	I1119 03:02:50.049204 1665998 out.go:368] Setting JSON to false
	I1119 03:02:50.050234 1665998 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":38697,"bootTime":1763482673,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1119 03:02:50.050339 1665998 start.go:143] virtualization:  
	I1119 03:02:50.056645 1665998 out.go:179] * [newest-cni-886248] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1119 03:02:50.060148 1665998 out.go:179]   - MINIKUBE_LOCATION=21924
	I1119 03:02:50.060261 1665998 notify.go:221] Checking for updates...
	I1119 03:02:50.066536 1665998 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 03:02:50.069481 1665998 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21924-1463525/kubeconfig
	I1119 03:02:50.072608 1665998 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-1463525/.minikube
	I1119 03:02:50.076168 1665998 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1119 03:02:50.079546 1665998 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 03:02:50.083735 1665998 config.go:182] Loaded profile config "no-preload-800908": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 03:02:50.083830 1665998 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 03:02:50.124461 1665998 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1119 03:02:50.124659 1665998 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 03:02:50.227923 1665998 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:true NGoroutines:68 SystemTime:2025-11-19 03:02:50.217681423 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 03:02:50.228027 1665998 docker.go:319] overlay module found
	I1119 03:02:50.231551 1665998 out.go:179] * Using the docker driver based on user configuration
	I1119 03:02:50.235882 1665998 start.go:309] selected driver: docker
	I1119 03:02:50.235908 1665998 start.go:930] validating driver "docker" against <nil>
	I1119 03:02:50.235924 1665998 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 03:02:50.236698 1665998 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 03:02:50.343382 1665998 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:true NGoroutines:68 SystemTime:2025-11-19 03:02:50.33066446 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 03:02:50.343540 1665998 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1119 03:02:50.343573 1665998 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1119 03:02:50.343830 1665998 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1119 03:02:50.347085 1665998 out.go:179] * Using Docker driver with root privileges
	I1119 03:02:50.350275 1665998 cni.go:84] Creating CNI manager for ""
	I1119 03:02:50.350347 1665998 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 03:02:50.350356 1665998 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1119 03:02:50.350439 1665998 start.go:353] cluster config:
	{Name:newest-cni-886248 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-886248 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 03:02:50.353582 1665998 out.go:179] * Starting "newest-cni-886248" primary control-plane node in "newest-cni-886248" cluster
	I1119 03:02:50.356449 1665998 cache.go:134] Beginning downloading kic base image for docker with crio
	I1119 03:02:50.359384 1665998 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1119 03:02:50.362786 1665998 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 03:02:50.362830 1665998 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1119 03:02:50.362840 1665998 cache.go:65] Caching tarball of preloaded images
	I1119 03:02:50.362921 1665998 preload.go:238] Found /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1119 03:02:50.362929 1665998 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 03:02:50.363048 1665998 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/newest-cni-886248/config.json ...
	I1119 03:02:50.363065 1665998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/newest-cni-886248/config.json: {Name:mk40368ec2b4a7212655a11be625a45ceb1425a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 03:02:50.363193 1665998 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1119 03:02:50.397784 1665998 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1119 03:02:50.397804 1665998 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1119 03:02:50.397816 1665998 cache.go:243] Successfully downloaded all kic artifacts
	I1119 03:02:50.397839 1665998 start.go:360] acquireMachinesLock for newest-cni-886248: {Name:mkfb71f15fb61e4b42e0e59e9b569595aaffd1c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 03:02:50.397924 1665998 start.go:364] duration metric: took 70.21µs to acquireMachinesLock for "newest-cni-886248"
	I1119 03:02:50.397947 1665998 start.go:93] Provisioning new machine with config: &{Name:newest-cni-886248 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-886248 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 03:02:50.398012 1665998 start.go:125] createHost starting for "" (driver="docker")
	I1119 03:02:45.579567 1662687 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1119 03:02:45.655098 1662687 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1119 03:02:45.655168 1662687 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1119 03:02:45.655211 1662687 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1119 03:02:45.655277 1662687 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1119 03:02:45.655331 1662687 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1119 03:02:45.655386 1662687 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1119 03:02:45.702561 1662687 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1119 03:02:45.815675 1662687 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1119 03:02:45.815759 1662687 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1119 03:02:45.815813 1662687 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1119 03:02:45.815868 1662687 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1119 03:02:45.815922 1662687 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1119 03:02:45.815972 1662687 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1119 03:02:45.816038 1662687 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1119 03:02:45.816103 1662687 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1119 03:02:45.978342 1662687 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1119 03:02:45.978448 1662687 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1119 03:02:45.978710 1662687 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1119 03:02:45.994561 1662687 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1119 03:02:45.994606 1662687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (22790144 bytes)
	I1119 03:02:45.994677 1662687 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1119 03:02:45.994764 1662687 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1119 03:02:45.994820 1662687 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1119 03:02:45.994874 1662687 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1119 03:02:45.994924 1662687 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1119 03:02:45.994971 1662687 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1119 03:02:45.995012 1662687 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1119 03:02:45.995061 1662687 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1119 03:02:46.072998 1662687 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1119 03:02:46.073035 1662687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (98216960 bytes)
	I1119 03:02:46.073084 1662687 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1119 03:02:46.073096 1662687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (20402176 bytes)
	I1119 03:02:46.073126 1662687 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1119 03:02:46.073135 1662687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (20730880 bytes)
	I1119 03:02:46.073169 1662687 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1119 03:02:46.073177 1662687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (24581632 bytes)
	I1119 03:02:46.073226 1662687 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1119 03:02:46.073306 1662687 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1119 03:02:46.082377 1662687 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1119 03:02:46.082418 1662687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1119 03:02:46.167360 1662687 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1119 03:02:46.167418 1662687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (15790592 bytes)
	W1119 03:02:46.184947 1662687 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1119 03:02:46.185124 1662687 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 03:02:46.211082 1662687 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1119 03:02:46.211199 1662687 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1119 03:02:46.645274 1662687 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1119 03:02:46.645316 1662687 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 03:02:46.645375 1662687 ssh_runner.go:195] Run: which crictl
	I1119 03:02:46.695852 1662687 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1119 03:02:46.759083 1662687 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 03:02:46.917465 1662687 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1119 03:02:46.917773 1662687 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1119 03:02:46.986981 1662687 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 03:02:49.373357 1662687 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1: (2.455533221s)
	I1119 03:02:49.373386 1662687 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1119 03:02:49.373403 1662687 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1119 03:02:49.373412 1662687 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.386356166s)
	I1119 03:02:49.373447 1662687 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	I1119 03:02:49.373487 1662687 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 03:02:50.402766 1665998 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1119 03:02:50.403005 1665998 start.go:159] libmachine.API.Create for "newest-cni-886248" (driver="docker")
	I1119 03:02:50.403042 1665998 client.go:173] LocalClient.Create starting
	I1119 03:02:50.403106 1665998 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem
	I1119 03:02:50.403135 1665998 main.go:143] libmachine: Decoding PEM data...
	I1119 03:02:50.403151 1665998 main.go:143] libmachine: Parsing certificate...
	I1119 03:02:50.403204 1665998 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/cert.pem
	I1119 03:02:50.403229 1665998 main.go:143] libmachine: Decoding PEM data...
	I1119 03:02:50.403238 1665998 main.go:143] libmachine: Parsing certificate...
	I1119 03:02:50.403563 1665998 cli_runner.go:164] Run: docker network inspect newest-cni-886248 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1119 03:02:50.419977 1665998 cli_runner.go:211] docker network inspect newest-cni-886248 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1119 03:02:50.420059 1665998 network_create.go:284] running [docker network inspect newest-cni-886248] to gather additional debugging logs...
	I1119 03:02:50.420074 1665998 cli_runner.go:164] Run: docker network inspect newest-cni-886248
	W1119 03:02:50.439050 1665998 cli_runner.go:211] docker network inspect newest-cni-886248 returned with exit code 1
	I1119 03:02:50.439078 1665998 network_create.go:287] error running [docker network inspect newest-cni-886248]: docker network inspect newest-cni-886248: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-886248 not found
	I1119 03:02:50.439090 1665998 network_create.go:289] output of [docker network inspect newest-cni-886248]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-886248 not found
	
	** /stderr **
	I1119 03:02:50.439181 1665998 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 03:02:50.459424 1665998 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-30778cc553ec IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:62:24:59:d9:05:e6} reservation:<nil>}
	I1119 03:02:50.459778 1665998 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-564f8befa544 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:aa:bb:c9:f1:3d:0c} reservation:<nil>}
	I1119 03:02:50.460000 1665998 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-fccf9ce7bac2 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:92:9c:a6:ca:f9:d9} reservation:<nil>}
	I1119 03:02:50.460469 1665998 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a0cb90}
	I1119 03:02:50.460488 1665998 network_create.go:124] attempt to create docker network newest-cni-886248 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1119 03:02:50.460545 1665998 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-886248 newest-cni-886248
	I1119 03:02:50.538205 1665998 network_create.go:108] docker network newest-cni-886248 192.168.76.0/24 created
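
The subnet scan above skips 192.168.49.0/24, 192.168.58.0/24 and 192.168.67.0/24 because they are already bound to bridge interfaces, then settles on 192.168.76.0/24. A minimal Go sketch of that idea, assuming the starting block and the step of 9 in the third octet inferred from the log output; the helper below is illustrative, not minikube's actual API:

    // Walk candidate /24 blocks and take the first whose gateway is not
    // already assigned to a local interface.
    package main

    import (
        "fmt"
        "net"
    )

    // taken reports whether the gateway address falls inside any subnet
    // currently configured on a host interface.
    func taken(gateway string) bool {
        addrs, err := net.InterfaceAddrs()
        if err != nil {
            return false
        }
        for _, a := range addrs {
            if ipnet, ok := a.(*net.IPNet); ok && ipnet.Contains(net.ParseIP(gateway)) {
                return true
            }
        }
        return false
    }

    func main() {
        for octet := 49; octet <= 255; octet += 9 {
            subnet := fmt.Sprintf("192.168.%d.0/24", octet)
            gateway := fmt.Sprintf("192.168.%d.1", octet)
            if taken(gateway) {
                fmt.Println("skipping subnet", subnet, "that is taken")
                continue
            }
            fmt.Println("using free private subnet", subnet, "gateway", gateway)
            break
        }
    }
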
	I1119 03:02:50.538234 1665998 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-886248" container
	I1119 03:02:50.538310 1665998 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1119 03:02:50.559259 1665998 cli_runner.go:164] Run: docker volume create newest-cni-886248 --label name.minikube.sigs.k8s.io=newest-cni-886248 --label created_by.minikube.sigs.k8s.io=true
	I1119 03:02:50.576807 1665998 oci.go:103] Successfully created a docker volume newest-cni-886248
	I1119 03:02:50.576896 1665998 cli_runner.go:164] Run: docker run --rm --name newest-cni-886248-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-886248 --entrypoint /usr/bin/test -v newest-cni-886248:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib
	I1119 03:02:51.382748 1665998 oci.go:107] Successfully prepared a docker volume newest-cni-886248
	I1119 03:02:51.382813 1665998 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 03:02:51.382823 1665998 kic.go:194] Starting extracting preloaded images to volume ...
	I1119 03:02:51.382895 1665998 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-886248:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir
	I1119 03:02:51.828761 1662687 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.455251693s)
	I1119 03:02:51.828804 1662687 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1119 03:02:51.828887 1662687 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1119 03:02:51.829038 1662687 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (2.455581259s)
	I1119 03:02:51.829053 1662687 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1119 03:02:51.829068 1662687 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1119 03:02:51.829098 1662687 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1119 03:02:53.424619 1662687 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.595495162s)
	I1119 03:02:53.424645 1662687 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1119 03:02:53.424663 1662687 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1119 03:02:53.424713 1662687 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	I1119 03:02:53.424781 1662687 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.59587838s)
	I1119 03:02:53.424806 1662687 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1119 03:02:53.424827 1662687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1119 03:02:55.323584 1662687 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (1.898843568s)
	I1119 03:02:55.323663 1662687 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1119 03:02:55.323707 1662687 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1119 03:02:55.323799 1662687 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1119 03:02:56.798644 1665998 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-886248:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir: (5.415701686s)
	I1119 03:02:56.798677 1665998 kic.go:203] duration metric: took 5.415850727s to extract preloaded images to volume ...
	W1119 03:02:56.798822 1665998 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1119 03:02:56.798942 1665998 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1119 03:02:56.873783 1665998 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-886248 --name newest-cni-886248 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-886248 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-886248 --network newest-cni-886248 --ip 192.168.76.2 --volume newest-cni-886248:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a
	I1119 03:02:57.398327 1665998 cli_runner.go:164] Run: docker container inspect newest-cni-886248 --format={{.State.Running}}
	I1119 03:02:57.438913 1665998 cli_runner.go:164] Run: docker container inspect newest-cni-886248 --format={{.State.Status}}
	I1119 03:02:57.459785 1665998 cli_runner.go:164] Run: docker exec newest-cni-886248 stat /var/lib/dpkg/alternatives/iptables
	I1119 03:02:57.534213 1665998 oci.go:144] the created container "newest-cni-886248" has a running status.
	I1119 03:02:57.534246 1665998 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21924-1463525/.minikube/machines/newest-cni-886248/id_rsa...
	I1119 03:02:57.786021 1665998 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21924-1463525/.minikube/machines/newest-cni-886248/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1119 03:02:57.813858 1665998 cli_runner.go:164] Run: docker container inspect newest-cni-886248 --format={{.State.Status}}
	I1119 03:02:57.844490 1665998 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1119 03:02:57.844513 1665998 kic_runner.go:114] Args: [docker exec --privileged newest-cni-886248 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1119 03:02:57.908416 1665998 cli_runner.go:164] Run: docker container inspect newest-cni-886248 --format={{.State.Status}}
	I1119 03:02:57.939939 1665998 machine.go:94] provisionDockerMachine start ...
	I1119 03:02:57.940040 1665998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-886248
	I1119 03:02:57.974436 1665998 main.go:143] libmachine: Using SSH client type: native
	I1119 03:02:57.974779 1665998 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34930 <nil> <nil>}
	I1119 03:02:57.974794 1665998 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 03:02:57.975497 1665998 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42888->127.0.0.1:34930: read: connection reset by peer
	I1119 03:02:56.857048 1662687 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.53320471s)
	I1119 03:02:56.857072 1662687 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1119 03:02:56.857090 1662687 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1119 03:02:56.857138 1662687 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	I1119 03:03:01.133857 1665998 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-886248
	
	I1119 03:03:01.133943 1665998 ubuntu.go:182] provisioning hostname "newest-cni-886248"
	I1119 03:03:01.134053 1665998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-886248
	I1119 03:03:01.157054 1665998 main.go:143] libmachine: Using SSH client type: native
	I1119 03:03:01.157374 1665998 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34930 <nil> <nil>}
	I1119 03:03:01.157387 1665998 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-886248 && echo "newest-cni-886248" | sudo tee /etc/hostname
	I1119 03:03:01.316183 1665998 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-886248
	
	I1119 03:03:01.316280 1665998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-886248
	I1119 03:03:01.337285 1665998 main.go:143] libmachine: Using SSH client type: native
	I1119 03:03:01.337647 1665998 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34930 <nil> <nil>}
	I1119 03:03:01.337681 1665998 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-886248' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-886248/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-886248' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 03:03:01.497762 1665998 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 03:03:01.497789 1665998 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21924-1463525/.minikube CaCertPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21924-1463525/.minikube}
	I1119 03:03:01.497812 1665998 ubuntu.go:190] setting up certificates
	I1119 03:03:01.497822 1665998 provision.go:84] configureAuth start
	I1119 03:03:01.497893 1665998 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-886248
	I1119 03:03:01.522948 1665998 provision.go:143] copyHostCerts
	I1119 03:03:01.523021 1665998 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-1463525/.minikube/cert.pem, removing ...
	I1119 03:03:01.523037 1665998 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-1463525/.minikube/cert.pem
	I1119 03:03:01.523112 1665998 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21924-1463525/.minikube/cert.pem (1123 bytes)
	I1119 03:03:01.523209 1665998 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-1463525/.minikube/key.pem, removing ...
	I1119 03:03:01.523221 1665998 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-1463525/.minikube/key.pem
	I1119 03:03:01.523248 1665998 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21924-1463525/.minikube/key.pem (1675 bytes)
	I1119 03:03:01.523305 1665998 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.pem, removing ...
	I1119 03:03:01.523314 1665998 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.pem
	I1119 03:03:01.523340 1665998 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.pem (1078 bytes)
	I1119 03:03:01.523391 1665998 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca-key.pem org=jenkins.newest-cni-886248 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-886248]
	I1119 03:03:01.835653 1665998 provision.go:177] copyRemoteCerts
	I1119 03:03:01.835730 1665998 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 03:03:01.835779 1665998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-886248
	I1119 03:03:01.861187 1665998 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34930 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/newest-cni-886248/id_rsa Username:docker}
	I1119 03:03:01.974707 1665998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1119 03:03:02.010904 1665998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1119 03:03:02.044013 1665998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1119 03:03:02.090000 1665998 provision.go:87] duration metric: took 592.163116ms to configureAuth
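
The configureAuth step above issues a server certificate signed by the profile CA with the SANs shown in the log (127.0.0.1, 192.168.76.2, localhost, minikube, newest-cni-886248). A minimal, self-contained crypto/x509 sketch of that kind of issuance, assuming a throwaway in-memory CA rather than the ca.pem/ca-key.pem files minikube reuses (errors ignored for brevity):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        // Throwaway CA; minikube loads an existing CA key pair instead.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server certificate carrying the IP and DNS SANs from the log.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-886248"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
            DNSNames:     []string{"localhost", "minikube", "newest-cni-886248"},
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER})))
    }
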
	I1119 03:03:02.090058 1665998 ubuntu.go:206] setting minikube options for container-runtime
	I1119 03:03:02.090272 1665998 config.go:182] Loaded profile config "newest-cni-886248": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 03:03:02.090431 1665998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-886248
	I1119 03:03:02.121114 1665998 main.go:143] libmachine: Using SSH client type: native
	I1119 03:03:02.121424 1665998 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34930 <nil> <nil>}
	I1119 03:03:02.121445 1665998 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 03:03:02.575841 1665998 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 03:03:02.575874 1665998 machine.go:97] duration metric: took 4.635910708s to provisionDockerMachine
	I1119 03:03:02.575884 1665998 client.go:176] duration metric: took 12.172836153s to LocalClient.Create
	I1119 03:03:02.575899 1665998 start.go:167] duration metric: took 12.172895687s to libmachine.API.Create "newest-cni-886248"
	I1119 03:03:02.575906 1665998 start.go:293] postStartSetup for "newest-cni-886248" (driver="docker")
	I1119 03:03:02.575915 1665998 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 03:03:02.575973 1665998 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 03:03:02.576027 1665998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-886248
	I1119 03:03:02.659125 1665998 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34930 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/newest-cni-886248/id_rsa Username:docker}
	I1119 03:03:02.767175 1665998 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 03:03:02.771659 1665998 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 03:03:02.771685 1665998 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 03:03:02.771700 1665998 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-1463525/.minikube/addons for local assets ...
	I1119 03:03:02.771752 1665998 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-1463525/.minikube/files for local assets ...
	I1119 03:03:02.771828 1665998 filesync.go:149] local asset: /home/jenkins/minikube-integration/21924-1463525/.minikube/files/etc/ssl/certs/14653772.pem -> 14653772.pem in /etc/ssl/certs
	I1119 03:03:02.771945 1665998 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 03:03:02.783502 1665998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/files/etc/ssl/certs/14653772.pem --> /etc/ssl/certs/14653772.pem (1708 bytes)
	I1119 03:03:02.807264 1665998 start.go:296] duration metric: took 231.335962ms for postStartSetup
	I1119 03:03:02.807824 1665998 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-886248
	I1119 03:03:02.839191 1665998 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/newest-cni-886248/config.json ...
	I1119 03:03:02.839469 1665998 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 03:03:02.839522 1665998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-886248
	I1119 03:03:02.877365 1665998 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34930 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/newest-cni-886248/id_rsa Username:docker}
	I1119 03:03:03.002689 1665998 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 03:03:03.016437 1665998 start.go:128] duration metric: took 12.618410376s to createHost
	I1119 03:03:03.016459 1665998 start.go:83] releasing machines lock for "newest-cni-886248", held for 12.618527969s
	I1119 03:03:03.016749 1665998 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-886248
	I1119 03:03:03.055144 1665998 ssh_runner.go:195] Run: cat /version.json
	I1119 03:03:03.055193 1665998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-886248
	I1119 03:03:03.055443 1665998 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 03:03:03.055514 1665998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-886248
	I1119 03:03:03.126818 1665998 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34930 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/newest-cni-886248/id_rsa Username:docker}
	I1119 03:03:03.146341 1665998 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34930 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/newest-cni-886248/id_rsa Username:docker}
	I1119 03:03:03.367347 1665998 ssh_runner.go:195] Run: systemctl --version
	I1119 03:03:03.376472 1665998 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 03:03:03.450370 1665998 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 03:03:03.457756 1665998 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 03:03:03.457824 1665998 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 03:03:03.513311 1665998 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1119 03:03:03.513333 1665998 start.go:496] detecting cgroup driver to use...
	I1119 03:03:03.513366 1665998 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1119 03:03:03.513415 1665998 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 03:03:03.552725 1665998 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 03:03:03.577604 1665998 docker.go:218] disabling cri-docker service (if available) ...
	I1119 03:03:03.577675 1665998 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 03:03:03.621314 1665998 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 03:03:03.649490 1665998 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 03:03:03.823155 1665998 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 03:03:04.002526 1665998 docker.go:234] disabling docker service ...
	I1119 03:03:04.002600 1665998 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 03:03:04.041833 1665998 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 03:03:04.064478 1665998 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 03:03:04.241034 1665998 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 03:03:04.388425 1665998 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 03:03:04.402740 1665998 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 03:03:04.416802 1665998 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 03:03:04.416861 1665998 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 03:03:04.426092 1665998 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1119 03:03:04.426152 1665998 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 03:03:04.435104 1665998 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 03:03:04.443781 1665998 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 03:03:04.458330 1665998 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 03:03:04.466664 1665998 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 03:03:04.480053 1665998 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 03:03:04.494055 1665998 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 03:03:04.503225 1665998 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 03:03:04.511822 1665998 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 03:03:04.519901 1665998 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 03:03:04.661204 1665998 ssh_runner.go:195] Run: sudo systemctl restart crio
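
The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place before crio is restarted: pause_image is pinned to registry.k8s.io/pause:3.10.1, cgroup_manager is switched to cgroupfs, and conmon_cgroup = "pod" is re-added right after it (the later edits for default_sysctls are not covered here). A minimal Go sketch that applies those first three substitutions to a config string with regexp; the sample input is illustrative, not the real file:

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        conf := `pause_image = "registry.k8s.io/pause:3.10"
    cgroup_manager = "systemd"
    conmon_cgroup = "system.slice"
    `
        // Pin the pause image CRI-O should use.
        conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
        // Switch the cgroup manager to cgroupfs.
        conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
        // Drop any existing conmon_cgroup line, then re-add it after cgroup_manager.
        conf = regexp.MustCompile(`(?m)^\s*conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
        conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
            ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")
        fmt.Print(conf)
    }
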
	I1119 03:03:00.955558 1662687 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (4.098397893s)
	I1119 03:03:00.955583 1662687 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1119 03:03:00.955605 1662687 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1119 03:03:00.955653 1662687 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1119 03:03:01.658773 1662687 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1119 03:03:01.658811 1662687 cache_images.go:125] Successfully loaded all cached images
	I1119 03:03:01.658818 1662687 cache_images.go:94] duration metric: took 16.859723766s to LoadCachedImages
	I1119 03:03:01.658828 1662687 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1119 03:03:01.658921 1662687 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-800908 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-800908 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 03:03:01.659075 1662687 ssh_runner.go:195] Run: crio config
	I1119 03:03:01.741281 1662687 cni.go:84] Creating CNI manager for ""
	I1119 03:03:01.741300 1662687 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 03:03:01.741320 1662687 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 03:03:01.741342 1662687 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-800908 NodeName:no-preload-800908 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 03:03:01.741462 1662687 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-800908"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1119 03:03:01.741566 1662687 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 03:03:01.752164 1662687 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1119 03:03:01.752224 1662687 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1119 03:03:01.761259 1662687 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
	I1119 03:03:01.761380 1662687 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1119 03:03:01.762402 1662687 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/linux/arm64/v1.34.1/kubelet
	I1119 03:03:01.763513 1662687 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/linux/arm64/v1.34.1/kubeadm
	I1119 03:03:01.769607 1662687 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1119 03:03:01.769639 1662687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/linux/arm64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (58130616 bytes)
	I1119 03:03:02.929201 1662687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 03:03:02.951001 1662687 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1119 03:03:02.959371 1662687 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1119 03:03:02.959467 1662687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/linux/arm64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (56426788 bytes)
	I1119 03:03:03.221609 1662687 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1119 03:03:03.238314 1662687 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1119 03:03:03.238354 1662687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/linux/arm64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (71434424 bytes)
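
The kubectl, kubelet and kubeadm downloads above carry a ?checksum=file:<url>.sha256 suffix, which appears to be the go-getter convention for verifying the payload against its published SHA-256 before it is cached and scp'd to the node. A minimal Go sketch of that verify-then-write pattern, using the kubectl URL from the log (error handling collapsed for brevity):

    package main

    import (
        "crypto/sha256"
        "fmt"
        "io"
        "net/http"
        "os"
        "strings"
    )

    // fetch downloads a URL into memory; a sketch, not production code.
    func fetch(url string) []byte {
        resp, err := http.Get(url)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        data, err := io.ReadAll(resp.Body)
        if err != nil {
            panic(err)
        }
        return data
    }

    func main() {
        base := "https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl"
        bin := fetch(base)
        // The published .sha256 file holds the expected digest.
        want := strings.Fields(string(fetch(base + ".sha256")))[0]
        got := fmt.Sprintf("%x", sha256.Sum256(bin))
        if got != want {
            panic("checksum mismatch for kubectl")
        }
        if err := os.WriteFile("kubectl", bin, 0o755); err != nil {
            panic(err)
        }
        fmt.Println("kubectl verified:", got)
    }
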
	I1119 03:03:03.757319 1662687 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 03:03:03.765681 1662687 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1119 03:03:03.780070 1662687 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 03:03:03.794175 1662687 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1119 03:03:03.806576 1662687 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1119 03:03:03.810606 1662687 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 03:03:03.819996 1662687 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 03:03:03.972615 1662687 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 03:03:03.989091 1662687 certs.go:69] Setting up /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/no-preload-800908 for IP: 192.168.85.2
	I1119 03:03:03.989171 1662687 certs.go:195] generating shared ca certs ...
	I1119 03:03:03.989204 1662687 certs.go:227] acquiring lock for ca certs: {Name:mk25124d30540ffea11da5f0aa38fb6f55186602 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 03:03:03.989390 1662687 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.key
	I1119 03:03:03.989469 1662687 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/proxy-client-ca.key
	I1119 03:03:03.989516 1662687 certs.go:257] generating profile certs ...
	I1119 03:03:03.989601 1662687 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/no-preload-800908/client.key
	I1119 03:03:03.989647 1662687 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/no-preload-800908/client.crt with IP's: []
	I1119 03:03:05.936881 1665998 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.275644854s)
	I1119 03:03:05.936906 1665998 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 03:03:05.936957 1665998 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 03:03:05.941845 1665998 start.go:564] Will wait 60s for crictl version
	I1119 03:03:05.941917 1665998 ssh_runner.go:195] Run: which crictl
	I1119 03:03:05.946007 1665998 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 03:03:05.976347 1665998 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1119 03:03:05.976436 1665998 ssh_runner.go:195] Run: crio --version
	I1119 03:03:06.010855 1665998 ssh_runner.go:195] Run: crio --version
	I1119 03:03:06.051461 1665998 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1119 03:03:06.054376 1665998 cli_runner.go:164] Run: docker network inspect newest-cni-886248 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 03:03:06.084271 1665998 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1119 03:03:06.088405 1665998 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 03:03:06.103042 1665998 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1119 03:03:06.105902 1665998 kubeadm.go:884] updating cluster {Name:newest-cni-886248 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-886248 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 03:03:06.106054 1665998 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 03:03:06.106128 1665998 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 03:03:06.153566 1665998 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 03:03:06.153591 1665998 crio.go:433] Images already preloaded, skipping extraction
	I1119 03:03:06.153663 1665998 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 03:03:06.205554 1665998 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 03:03:06.205580 1665998 cache_images.go:86] Images are preloaded, skipping loading
	I1119 03:03:06.205592 1665998 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1119 03:03:06.205698 1665998 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-886248 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-886248 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 03:03:06.205785 1665998 ssh_runner.go:195] Run: crio config
	I1119 03:03:06.309793 1665998 cni.go:84] Creating CNI manager for ""
	I1119 03:03:06.309816 1665998 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 03:03:06.309832 1665998 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1119 03:03:06.309859 1665998 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-886248 NodeName:newest-cni-886248 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 03:03:06.309996 1665998 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-886248"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1119 03:03:06.310076 1665998 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 03:03:06.321043 1665998 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 03:03:06.321114 1665998 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 03:03:06.345442 1665998 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1119 03:03:06.378897 1665998 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 03:03:06.394959 1665998 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1119 03:03:06.409615 1665998 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1119 03:03:06.413614 1665998 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 03:03:06.423605 1665998 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 03:03:06.581410 1665998 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 03:03:06.598176 1665998 certs.go:69] Setting up /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/newest-cni-886248 for IP: 192.168.76.2
	I1119 03:03:06.598196 1665998 certs.go:195] generating shared ca certs ...
	I1119 03:03:06.598213 1665998 certs.go:227] acquiring lock for ca certs: {Name:mk25124d30540ffea11da5f0aa38fb6f55186602 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 03:03:06.598346 1665998 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.key
	I1119 03:03:06.598394 1665998 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/proxy-client-ca.key
	I1119 03:03:06.598403 1665998 certs.go:257] generating profile certs ...
	I1119 03:03:06.598456 1665998 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/newest-cni-886248/client.key
	I1119 03:03:06.598469 1665998 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/newest-cni-886248/client.crt with IP's: []
	I1119 03:03:07.667616 1665998 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/newest-cni-886248/client.crt ...
	I1119 03:03:07.667695 1665998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/newest-cni-886248/client.crt: {Name:mk9af954cdc2d2b9e94e9ec017f5cb73ff632ebc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 03:03:07.667940 1665998 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/newest-cni-886248/client.key ...
	I1119 03:03:07.667977 1665998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/newest-cni-886248/client.key: {Name:mk9e6e5aefd7102fae467cbf603c75742a7b27f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 03:03:07.668114 1665998 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/newest-cni-886248/apiserver.key.774757e0
	I1119 03:03:07.668151 1665998 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/newest-cni-886248/apiserver.crt.774757e0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1119 03:03:08.527203 1665998 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/newest-cni-886248/apiserver.crt.774757e0 ...
	I1119 03:03:08.527238 1665998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/newest-cni-886248/apiserver.crt.774757e0: {Name:mk804ee430352d795e87791345535f1b1cb4d4d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 03:03:08.527435 1665998 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/newest-cni-886248/apiserver.key.774757e0 ...
	I1119 03:03:08.527451 1665998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/newest-cni-886248/apiserver.key.774757e0: {Name:mk437c0a35a663da84f181d682060d1b6835a358 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 03:03:08.527536 1665998 certs.go:382] copying /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/newest-cni-886248/apiserver.crt.774757e0 -> /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/newest-cni-886248/apiserver.crt
	I1119 03:03:08.527621 1665998 certs.go:386] copying /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/newest-cni-886248/apiserver.key.774757e0 -> /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/newest-cni-886248/apiserver.key
	I1119 03:03:08.527697 1665998 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/newest-cni-886248/proxy-client.key
	I1119 03:03:08.527717 1665998 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/newest-cni-886248/proxy-client.crt with IP's: []
	I1119 03:03:08.966072 1665998 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/newest-cni-886248/proxy-client.crt ...
	I1119 03:03:08.966107 1665998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/newest-cni-886248/proxy-client.crt: {Name:mkb5b0b913b4f47fc79a6e401a384195007c9837 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 03:03:08.966335 1665998 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/newest-cni-886248/proxy-client.key ...
	I1119 03:03:08.966352 1665998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/newest-cni-886248/proxy-client.key: {Name:mkdeb9284020d4a2f585c2c279eb2bd226636fd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
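
The apiserver certificate generated above carries the SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]; the 10.96.0.1 entry is the first usable address of the 10.96.0.0/12 service CIDR from the kubeadm config, i.e. the ClusterIP assigned to the default "kubernetes" service. A short Go sketch of that derivation:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        // Service CIDR from the kubeadm config above.
        prefix := netip.MustParsePrefix("10.96.0.0/12")
        first := prefix.Addr().Next() // network address + 1
        fmt.Println(first)            // 10.96.0.1
    }
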
	I1119 03:03:08.966567 1665998 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/1465377.pem (1338 bytes)
	W1119 03:03:08.966613 1665998 certs.go:480] ignoring /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/1465377_empty.pem, impossibly tiny 0 bytes
	I1119 03:03:08.966628 1665998 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca-key.pem (1679 bytes)
	I1119 03:03:08.966654 1665998 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem (1078 bytes)
	I1119 03:03:08.966686 1665998 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/cert.pem (1123 bytes)
	I1119 03:03:08.966713 1665998 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/key.pem (1675 bytes)
	I1119 03:03:08.966776 1665998 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/files/etc/ssl/certs/14653772.pem (1708 bytes)
	I1119 03:03:08.967371 1665998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 03:03:08.984433 1665998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1119 03:03:09.005859 1665998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 03:03:09.071271 1665998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1119 03:03:09.092580 1665998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/newest-cni-886248/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1119 03:03:09.111861 1665998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/newest-cni-886248/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1119 03:03:09.130513 1665998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/newest-cni-886248/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 03:03:09.148873 1665998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/newest-cni-886248/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1119 03:03:09.167162 1665998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/files/etc/ssl/certs/14653772.pem --> /usr/share/ca-certificates/14653772.pem (1708 bytes)
	I1119 03:03:09.185894 1665998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 03:03:09.204315 1665998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/1465377.pem --> /usr/share/ca-certificates/1465377.pem (1338 bytes)
	I1119 03:03:09.222504 1665998 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 03:03:09.235805 1665998 ssh_runner.go:195] Run: openssl version
	I1119 03:03:09.242212 1665998 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14653772.pem && ln -fs /usr/share/ca-certificates/14653772.pem /etc/ssl/certs/14653772.pem"
	I1119 03:03:09.250939 1665998 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14653772.pem
	I1119 03:03:09.254864 1665998 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 02:04 /usr/share/ca-certificates/14653772.pem
	I1119 03:03:09.254930 1665998 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14653772.pem
	I1119 03:03:09.309541 1665998 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14653772.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 03:03:09.319139 1665998 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 03:03:09.332257 1665998 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 03:03:09.337201 1665998 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 01:58 /usr/share/ca-certificates/minikubeCA.pem
	I1119 03:03:09.337266 1665998 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 03:03:09.381382 1665998 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 03:03:09.389529 1665998 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1465377.pem && ln -fs /usr/share/ca-certificates/1465377.pem /etc/ssl/certs/1465377.pem"
	I1119 03:03:09.400227 1665998 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1465377.pem
	I1119 03:03:09.404108 1665998 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 02:04 /usr/share/ca-certificates/1465377.pem
	I1119 03:03:09.404228 1665998 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1465377.pem
	I1119 03:03:09.445626 1665998 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1465377.pem /etc/ssl/certs/51391683.0"
	I1119 03:03:09.454454 1665998 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 03:03:09.458414 1665998 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1119 03:03:09.458466 1665998 kubeadm.go:401] StartCluster: {Name:newest-cni-886248 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-886248 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 03:03:09.458546 1665998 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 03:03:09.458605 1665998 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 03:03:09.500208 1665998 cri.go:89] found id: ""
	I1119 03:03:09.500285 1665998 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 03:03:09.509698 1665998 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1119 03:03:09.518124 1665998 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1119 03:03:09.518187 1665998 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1119 03:03:09.533795 1665998 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1119 03:03:09.533815 1665998 kubeadm.go:158] found existing configuration files:
	
	I1119 03:03:09.533868 1665998 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1119 03:03:09.542466 1665998 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1119 03:03:09.542531 1665998 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1119 03:03:09.550078 1665998 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1119 03:03:09.558451 1665998 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1119 03:03:09.558520 1665998 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1119 03:03:09.566385 1665998 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1119 03:03:09.574555 1665998 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1119 03:03:09.574615 1665998 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1119 03:03:09.582128 1665998 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1119 03:03:09.590591 1665998 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1119 03:03:09.590683 1665998 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1119 03:03:09.598312 1665998 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1119 03:03:09.646903 1665998 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1119 03:03:09.647222 1665998 kubeadm.go:319] [preflight] Running pre-flight checks
	I1119 03:03:09.677129 1665998 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1119 03:03:09.677209 1665998 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1119 03:03:09.677261 1665998 kubeadm.go:319] OS: Linux
	I1119 03:03:09.677313 1665998 kubeadm.go:319] CGROUPS_CPU: enabled
	I1119 03:03:09.677369 1665998 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1119 03:03:09.677429 1665998 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1119 03:03:09.677484 1665998 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1119 03:03:09.677558 1665998 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1119 03:03:09.677615 1665998 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1119 03:03:09.677667 1665998 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1119 03:03:09.677722 1665998 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1119 03:03:09.677774 1665998 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1119 03:03:09.756783 1665998 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1119 03:03:09.756902 1665998 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1119 03:03:09.757004 1665998 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1119 03:03:09.813869 1665998 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1119 03:03:09.821310 1665998 out.go:252]   - Generating certificates and keys ...
	I1119 03:03:09.821407 1665998 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1119 03:03:09.821482 1665998 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1119 03:03:05.638695 1662687 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/no-preload-800908/client.crt ...
	I1119 03:03:05.638728 1662687 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/no-preload-800908/client.crt: {Name:mk03f5bd332551c50f180e5f0a679fe365e14a34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 03:03:05.638922 1662687 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/no-preload-800908/client.key ...
	I1119 03:03:05.638937 1662687 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/no-preload-800908/client.key: {Name:mk8ae97812e20bcd47ef3df737d387d498981688 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 03:03:05.639036 1662687 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/no-preload-800908/apiserver.key.a073045a
	I1119 03:03:05.639058 1662687 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/no-preload-800908/apiserver.crt.a073045a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1119 03:03:06.525358 1662687 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/no-preload-800908/apiserver.crt.a073045a ...
	I1119 03:03:06.525392 1662687 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/no-preload-800908/apiserver.crt.a073045a: {Name:mk5ecd888882f45a502aba50a599c52c332eb91c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 03:03:06.525591 1662687 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/no-preload-800908/apiserver.key.a073045a ...
	I1119 03:03:06.525608 1662687 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/no-preload-800908/apiserver.key.a073045a: {Name:mk5dbed2ad518caa427beae2332e535b32c981a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 03:03:06.525693 1662687 certs.go:382] copying /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/no-preload-800908/apiserver.crt.a073045a -> /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/no-preload-800908/apiserver.crt
	I1119 03:03:06.525775 1662687 certs.go:386] copying /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/no-preload-800908/apiserver.key.a073045a -> /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/no-preload-800908/apiserver.key
	I1119 03:03:06.525838 1662687 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/no-preload-800908/proxy-client.key
	I1119 03:03:06.525857 1662687 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/no-preload-800908/proxy-client.crt with IP's: []
	I1119 03:03:07.888506 1662687 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/no-preload-800908/proxy-client.crt ...
	I1119 03:03:07.888537 1662687 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/no-preload-800908/proxy-client.crt: {Name:mke3343c906a73a48d4fb4b60d26084c1b577530 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 03:03:07.888755 1662687 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/no-preload-800908/proxy-client.key ...
	I1119 03:03:07.888771 1662687 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/no-preload-800908/proxy-client.key: {Name:mkd09482d9f21d3420ba24641968978f07bce4ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 03:03:07.888982 1662687 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/1465377.pem (1338 bytes)
	W1119 03:03:07.889026 1662687 certs.go:480] ignoring /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/1465377_empty.pem, impossibly tiny 0 bytes
	I1119 03:03:07.889039 1662687 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca-key.pem (1679 bytes)
	I1119 03:03:07.889068 1662687 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem (1078 bytes)
	I1119 03:03:07.889096 1662687 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/cert.pem (1123 bytes)
	I1119 03:03:07.889121 1662687 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/key.pem (1675 bytes)
	I1119 03:03:07.889170 1662687 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/files/etc/ssl/certs/14653772.pem (1708 bytes)
	I1119 03:03:07.889778 1662687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 03:03:07.907117 1662687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1119 03:03:07.924883 1662687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 03:03:07.942765 1662687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1119 03:03:07.961405 1662687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/no-preload-800908/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1119 03:03:07.981526 1662687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/no-preload-800908/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1119 03:03:08.001709 1662687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/no-preload-800908/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 03:03:08.021673 1662687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/no-preload-800908/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1119 03:03:08.038860 1662687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 03:03:08.058539 1662687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/1465377.pem --> /usr/share/ca-certificates/1465377.pem (1338 bytes)
	I1119 03:03:08.077133 1662687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/files/etc/ssl/certs/14653772.pem --> /usr/share/ca-certificates/14653772.pem (1708 bytes)
	I1119 03:03:08.095726 1662687 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 03:03:08.109197 1662687 ssh_runner.go:195] Run: openssl version
	I1119 03:03:08.117891 1662687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 03:03:08.127386 1662687 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 03:03:08.131193 1662687 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 01:58 /usr/share/ca-certificates/minikubeCA.pem
	I1119 03:03:08.131270 1662687 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 03:03:08.173613 1662687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 03:03:08.182394 1662687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1465377.pem && ln -fs /usr/share/ca-certificates/1465377.pem /etc/ssl/certs/1465377.pem"
	I1119 03:03:08.190800 1662687 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1465377.pem
	I1119 03:03:08.194578 1662687 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 02:04 /usr/share/ca-certificates/1465377.pem
	I1119 03:03:08.194661 1662687 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1465377.pem
	I1119 03:03:08.236214 1662687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1465377.pem /etc/ssl/certs/51391683.0"
	I1119 03:03:08.244831 1662687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14653772.pem && ln -fs /usr/share/ca-certificates/14653772.pem /etc/ssl/certs/14653772.pem"
	I1119 03:03:08.253159 1662687 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14653772.pem
	I1119 03:03:08.256644 1662687 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 02:04 /usr/share/ca-certificates/14653772.pem
	I1119 03:03:08.256719 1662687 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14653772.pem
	I1119 03:03:08.323631 1662687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14653772.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 03:03:08.337463 1662687 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 03:03:08.344242 1662687 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1119 03:03:08.344294 1662687 kubeadm.go:401] StartCluster: {Name:no-preload-800908 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-800908 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 03:03:08.344374 1662687 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 03:03:08.344440 1662687 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 03:03:08.398337 1662687 cri.go:89] found id: ""
	I1119 03:03:08.398440 1662687 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 03:03:08.408282 1662687 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1119 03:03:08.416499 1662687 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1119 03:03:08.416569 1662687 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1119 03:03:08.425780 1662687 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1119 03:03:08.425799 1662687 kubeadm.go:158] found existing configuration files:
	
	I1119 03:03:08.425851 1662687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1119 03:03:08.434063 1662687 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1119 03:03:08.434161 1662687 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1119 03:03:08.441881 1662687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1119 03:03:08.449981 1662687 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1119 03:03:08.450062 1662687 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1119 03:03:08.457333 1662687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1119 03:03:08.465120 1662687 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1119 03:03:08.465199 1662687 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1119 03:03:08.472493 1662687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1119 03:03:08.480391 1662687 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1119 03:03:08.480470 1662687 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1119 03:03:08.487962 1662687 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1119 03:03:08.597561 1662687 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1119 03:03:08.597805 1662687 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1119 03:03:08.686644 1662687 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1119 03:03:10.467069 1665998 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1119 03:03:10.978789 1665998 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1119 03:03:11.654705 1665998 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1119 03:03:11.916374 1665998 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1119 03:03:12.693868 1665998 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1119 03:03:12.694018 1665998 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-886248] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1119 03:03:12.822035 1665998 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1119 03:03:12.822559 1665998 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-886248] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1119 03:03:12.899975 1665998 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1119 03:03:13.171511 1665998 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1119 03:03:13.573911 1665998 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1119 03:03:13.574379 1665998 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1119 03:03:13.866988 1665998 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1119 03:03:14.867494 1665998 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1119 03:03:16.415503 1665998 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1119 03:03:16.461880 1665998 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1119 03:03:17.656664 1665998 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1119 03:03:17.657834 1665998 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1119 03:03:17.660765 1665998 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1119 03:03:17.664533 1665998 out.go:252]   - Booting up control plane ...
	I1119 03:03:17.664705 1665998 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1119 03:03:17.664861 1665998 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1119 03:03:17.670745 1665998 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1119 03:03:17.689984 1665998 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1119 03:03:17.690350 1665998 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1119 03:03:17.698545 1665998 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1119 03:03:17.699493 1665998 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1119 03:03:17.699545 1665998 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1119 03:03:17.849973 1665998 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1119 03:03:17.850101 1665998 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1119 03:03:19.853882 1665998 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.001536264s
	I1119 03:03:19.855294 1665998 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1119 03:03:19.855668 1665998 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1119 03:03:19.856525 1665998 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1119 03:03:19.856882 1665998 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1119 03:03:27.392795 1665998 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 7.535473285s
	I1119 03:03:31.583215 1662687 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1119 03:03:31.583278 1662687 kubeadm.go:319] [preflight] Running pre-flight checks
	I1119 03:03:31.583379 1662687 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1119 03:03:31.583442 1662687 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1119 03:03:31.583488 1662687 kubeadm.go:319] OS: Linux
	I1119 03:03:31.583543 1662687 kubeadm.go:319] CGROUPS_CPU: enabled
	I1119 03:03:31.583593 1662687 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1119 03:03:31.583644 1662687 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1119 03:03:31.583694 1662687 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1119 03:03:31.583745 1662687 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1119 03:03:31.583795 1662687 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1119 03:03:31.583853 1662687 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1119 03:03:31.583904 1662687 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1119 03:03:31.583953 1662687 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1119 03:03:31.584027 1662687 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1119 03:03:31.584141 1662687 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1119 03:03:31.584242 1662687 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1119 03:03:31.584311 1662687 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1119 03:03:31.587392 1662687 out.go:252]   - Generating certificates and keys ...
	I1119 03:03:31.587508 1662687 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1119 03:03:31.587589 1662687 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1119 03:03:31.587662 1662687 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1119 03:03:31.587727 1662687 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1119 03:03:31.587797 1662687 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1119 03:03:31.587852 1662687 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1119 03:03:31.587910 1662687 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1119 03:03:31.588034 1662687 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-800908] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1119 03:03:31.588094 1662687 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1119 03:03:31.588232 1662687 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-800908] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1119 03:03:31.588301 1662687 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1119 03:03:31.588376 1662687 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1119 03:03:31.588433 1662687 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1119 03:03:31.588492 1662687 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1119 03:03:31.588551 1662687 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1119 03:03:31.588615 1662687 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1119 03:03:31.588679 1662687 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1119 03:03:31.588750 1662687 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1119 03:03:31.588808 1662687 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1119 03:03:31.588896 1662687 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1119 03:03:31.588969 1662687 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1119 03:03:31.592095 1662687 out.go:252]   - Booting up control plane ...
	I1119 03:03:31.592235 1662687 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1119 03:03:31.592346 1662687 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1119 03:03:31.592441 1662687 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1119 03:03:31.592581 1662687 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1119 03:03:31.592711 1662687 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1119 03:03:31.592846 1662687 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1119 03:03:31.592971 1662687 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1119 03:03:31.593025 1662687 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1119 03:03:31.593188 1662687 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1119 03:03:31.593319 1662687 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1119 03:03:31.593389 1662687 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.501296208s
	I1119 03:03:31.593500 1662687 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1119 03:03:31.593620 1662687 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1119 03:03:31.593727 1662687 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1119 03:03:31.593827 1662687 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1119 03:03:31.593926 1662687 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 8.144514144s
	I1119 03:03:31.594021 1662687 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 10.051488468s
	I1119 03:03:31.594105 1662687 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 12.002523624s
	I1119 03:03:31.594237 1662687 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1119 03:03:31.594380 1662687 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1119 03:03:31.594480 1662687 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1119 03:03:31.594701 1662687 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-800908 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1119 03:03:31.594786 1662687 kubeadm.go:319] [bootstrap-token] Using token: 4jix7y.tkxznad7tv269avz
	I1119 03:03:31.597749 1662687 out.go:252]   - Configuring RBAC rules ...
	I1119 03:03:31.597912 1662687 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1119 03:03:31.598026 1662687 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1119 03:03:31.598208 1662687 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1119 03:03:31.598366 1662687 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1119 03:03:31.598506 1662687 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1119 03:03:31.598631 1662687 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1119 03:03:31.598813 1662687 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1119 03:03:31.598877 1662687 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1119 03:03:31.598944 1662687 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1119 03:03:31.598953 1662687 kubeadm.go:319] 
	I1119 03:03:31.599033 1662687 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1119 03:03:31.599043 1662687 kubeadm.go:319] 
	I1119 03:03:31.599143 1662687 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1119 03:03:31.599153 1662687 kubeadm.go:319] 
	I1119 03:03:31.599198 1662687 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1119 03:03:31.599273 1662687 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1119 03:03:31.599334 1662687 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1119 03:03:31.599342 1662687 kubeadm.go:319] 
	I1119 03:03:31.599403 1662687 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1119 03:03:31.599413 1662687 kubeadm.go:319] 
	I1119 03:03:31.599470 1662687 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1119 03:03:31.599478 1662687 kubeadm.go:319] 
	I1119 03:03:31.599545 1662687 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1119 03:03:31.599646 1662687 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1119 03:03:31.599725 1662687 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1119 03:03:31.599733 1662687 kubeadm.go:319] 
	I1119 03:03:31.599835 1662687 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1119 03:03:31.599938 1662687 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1119 03:03:31.599949 1662687 kubeadm.go:319] 
	I1119 03:03:31.600062 1662687 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 4jix7y.tkxznad7tv269avz \
	I1119 03:03:31.600182 1662687 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:abb22cc8ae8e186956cff8cc7dabd6326c697e35c4ead85bcd3b5702cdc3f73a \
	I1119 03:03:31.600209 1662687 kubeadm.go:319] 	--control-plane 
	I1119 03:03:31.600217 1662687 kubeadm.go:319] 
	I1119 03:03:31.600312 1662687 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1119 03:03:31.600319 1662687 kubeadm.go:319] 
	I1119 03:03:31.600418 1662687 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 4jix7y.tkxznad7tv269avz \
	I1119 03:03:31.600566 1662687 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:abb22cc8ae8e186956cff8cc7dabd6326c697e35c4ead85bcd3b5702cdc3f73a 
	I1119 03:03:31.600578 1662687 cni.go:84] Creating CNI manager for ""
	I1119 03:03:31.600599 1662687 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 03:03:31.603840 1662687 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1119 03:03:30.426994 1665998 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 10.569377475s
	I1119 03:03:32.357988 1665998 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 12.501809798s
	I1119 03:03:32.411302 1665998 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1119 03:03:32.444403 1665998 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1119 03:03:32.467952 1665998 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1119 03:03:32.468559 1665998 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-886248 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1119 03:03:32.495794 1665998 kubeadm.go:319] [bootstrap-token] Using token: ud21n0.jf8shw48wyomlaxa
	I1119 03:03:32.500953 1665998 out.go:252]   - Configuring RBAC rules ...
	I1119 03:03:32.501199 1665998 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1119 03:03:32.513584 1665998 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1119 03:03:32.535629 1665998 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1119 03:03:32.541260 1665998 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1119 03:03:32.553159 1665998 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1119 03:03:32.564194 1665998 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1119 03:03:32.766576 1665998 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1119 03:03:33.208101 1665998 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1119 03:03:33.766320 1665998 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1119 03:03:33.767326 1665998 kubeadm.go:319] 
	I1119 03:03:33.767431 1665998 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1119 03:03:33.767446 1665998 kubeadm.go:319] 
	I1119 03:03:33.767528 1665998 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1119 03:03:33.767539 1665998 kubeadm.go:319] 
	I1119 03:03:33.767566 1665998 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1119 03:03:33.767653 1665998 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1119 03:03:33.767729 1665998 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1119 03:03:33.767741 1665998 kubeadm.go:319] 
	I1119 03:03:33.767799 1665998 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1119 03:03:33.767809 1665998 kubeadm.go:319] 
	I1119 03:03:33.767865 1665998 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1119 03:03:33.767874 1665998 kubeadm.go:319] 
	I1119 03:03:33.767928 1665998 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1119 03:03:33.768065 1665998 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1119 03:03:33.768156 1665998 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1119 03:03:33.768166 1665998 kubeadm.go:319] 
	I1119 03:03:33.768299 1665998 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1119 03:03:33.768440 1665998 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1119 03:03:33.768459 1665998 kubeadm.go:319] 
	I1119 03:03:33.768558 1665998 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token ud21n0.jf8shw48wyomlaxa \
	I1119 03:03:33.768712 1665998 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:abb22cc8ae8e186956cff8cc7dabd6326c697e35c4ead85bcd3b5702cdc3f73a \
	I1119 03:03:33.768772 1665998 kubeadm.go:319] 	--control-plane 
	I1119 03:03:33.768790 1665998 kubeadm.go:319] 
	I1119 03:03:33.768921 1665998 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1119 03:03:33.768946 1665998 kubeadm.go:319] 
	I1119 03:03:33.769070 1665998 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token ud21n0.jf8shw48wyomlaxa \
	I1119 03:03:33.769203 1665998 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:abb22cc8ae8e186956cff8cc7dabd6326c697e35c4ead85bcd3b5702cdc3f73a 
	I1119 03:03:33.773646 1665998 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1119 03:03:33.773949 1665998 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1119 03:03:33.774129 1665998 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1119 03:03:33.774159 1665998 cni.go:84] Creating CNI manager for ""
	I1119 03:03:33.774193 1665998 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 03:03:33.779050 1665998 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1119 03:03:33.781943 1665998 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1119 03:03:33.786995 1665998 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1119 03:03:33.787076 1665998 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1119 03:03:33.800410 1665998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1119 03:03:34.177735 1665998 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1119 03:03:34.177864 1665998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 03:03:34.177944 1665998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-886248 minikube.k8s.io/updated_at=2025_11_19T03_03_34_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ebd49efe8b7d44f89fc96e21beffcd64dfee8277 minikube.k8s.io/name=newest-cni-886248 minikube.k8s.io/primary=true
	I1119 03:03:34.389704 1665998 ops.go:34] apiserver oom_adj: -16
	I1119 03:03:34.389886 1665998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 03:03:34.889992 1665998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 03:03:31.606731 1662687 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1119 03:03:31.610889 1662687 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1119 03:03:31.610907 1662687 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1119 03:03:31.634144 1662687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1119 03:03:32.051768 1662687 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1119 03:03:32.052395 1662687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-800908 minikube.k8s.io/updated_at=2025_11_19T03_03_32_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ebd49efe8b7d44f89fc96e21beffcd64dfee8277 minikube.k8s.io/name=no-preload-800908 minikube.k8s.io/primary=true
	I1119 03:03:32.052579 1662687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 03:03:32.460216 1662687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 03:03:32.460287 1662687 ops.go:34] apiserver oom_adj: -16
	I1119 03:03:32.960401 1662687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 03:03:33.460875 1662687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 03:03:33.960880 1662687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 03:03:34.460838 1662687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 03:03:34.960551 1662687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 03:03:35.460924 1662687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 03:03:35.960706 1662687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 03:03:36.460930 1662687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 03:03:36.585457 1662687 kubeadm.go:1114] duration metric: took 4.533418662s to wait for elevateKubeSystemPrivileges
	I1119 03:03:36.585483 1662687 kubeadm.go:403] duration metric: took 28.241193518s to StartCluster
	I1119 03:03:36.585500 1662687 settings.go:142] acquiring lock: {Name:mk60840cd190596747bdc8f4ec6ab9c30048bf11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 03:03:36.585600 1662687 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21924-1463525/kubeconfig
	I1119 03:03:36.586240 1662687 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/kubeconfig: {Name:mk569dbb5fa76b01404eb61ebb04e9884665ea78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 03:03:36.586461 1662687 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1119 03:03:36.586472 1662687 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 03:03:36.586714 1662687 config.go:182] Loaded profile config "no-preload-800908": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 03:03:36.586760 1662687 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 03:03:36.586824 1662687 addons.go:70] Setting storage-provisioner=true in profile "no-preload-800908"
	I1119 03:03:36.586840 1662687 addons.go:239] Setting addon storage-provisioner=true in "no-preload-800908"
	I1119 03:03:36.586863 1662687 host.go:66] Checking if "no-preload-800908" exists ...
	I1119 03:03:36.587335 1662687 cli_runner.go:164] Run: docker container inspect no-preload-800908 --format={{.State.Status}}
	I1119 03:03:36.587859 1662687 addons.go:70] Setting default-storageclass=true in profile "no-preload-800908"
	I1119 03:03:36.587880 1662687 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-800908"
	I1119 03:03:36.588169 1662687 cli_runner.go:164] Run: docker container inspect no-preload-800908 --format={{.State.Status}}
	I1119 03:03:36.589995 1662687 out.go:179] * Verifying Kubernetes components...
	I1119 03:03:36.593893 1662687 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 03:03:36.618958 1662687 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 03:03:35.390847 1665998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 03:03:35.890459 1665998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 03:03:36.390072 1665998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 03:03:36.890645 1665998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 03:03:37.390280 1665998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 03:03:37.890459 1665998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 03:03:38.116461 1665998 kubeadm.go:1114] duration metric: took 3.938642243s to wait for elevateKubeSystemPrivileges
	I1119 03:03:38.116493 1665998 kubeadm.go:403] duration metric: took 28.658030982s to StartCluster
	I1119 03:03:38.116510 1665998 settings.go:142] acquiring lock: {Name:mk60840cd190596747bdc8f4ec6ab9c30048bf11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 03:03:38.116572 1665998 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21924-1463525/kubeconfig
	I1119 03:03:38.117641 1665998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/kubeconfig: {Name:mk569dbb5fa76b01404eb61ebb04e9884665ea78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 03:03:38.117858 1665998 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 03:03:38.117949 1665998 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1119 03:03:38.118230 1665998 config.go:182] Loaded profile config "newest-cni-886248": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 03:03:38.118274 1665998 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 03:03:38.118335 1665998 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-886248"
	I1119 03:03:38.118348 1665998 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-886248"
	I1119 03:03:38.118369 1665998 host.go:66] Checking if "newest-cni-886248" exists ...
	I1119 03:03:38.119100 1665998 cli_runner.go:164] Run: docker container inspect newest-cni-886248 --format={{.State.Status}}
	I1119 03:03:38.119567 1665998 addons.go:70] Setting default-storageclass=true in profile "newest-cni-886248"
	I1119 03:03:38.119584 1665998 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-886248"
	I1119 03:03:38.120041 1665998 cli_runner.go:164] Run: docker container inspect newest-cni-886248 --format={{.State.Status}}
	I1119 03:03:38.121560 1665998 out.go:179] * Verifying Kubernetes components...
	I1119 03:03:38.125333 1665998 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 03:03:38.172556 1665998 addons.go:239] Setting addon default-storageclass=true in "newest-cni-886248"
	I1119 03:03:38.172596 1665998 host.go:66] Checking if "newest-cni-886248" exists ...
	I1119 03:03:38.173034 1665998 cli_runner.go:164] Run: docker container inspect newest-cni-886248 --format={{.State.Status}}
	I1119 03:03:38.193637 1665998 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 03:03:36.625170 1662687 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 03:03:36.625201 1662687 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 03:03:36.625273 1662687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-800908
	I1119 03:03:36.645524 1662687 addons.go:239] Setting addon default-storageclass=true in "no-preload-800908"
	I1119 03:03:36.645568 1662687 host.go:66] Checking if "no-preload-800908" exists ...
	I1119 03:03:36.645990 1662687 cli_runner.go:164] Run: docker container inspect no-preload-800908 --format={{.State.Status}}
	I1119 03:03:36.686450 1662687 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34925 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/no-preload-800908/id_rsa Username:docker}
	I1119 03:03:36.706457 1662687 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 03:03:36.706484 1662687 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 03:03:36.706544 1662687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-800908
	I1119 03:03:36.740867 1662687 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34925 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/no-preload-800908/id_rsa Username:docker}
	I1119 03:03:37.101451 1662687 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 03:03:37.143950 1662687 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1119 03:03:37.144129 1662687 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 03:03:37.272475 1662687 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 03:03:39.030105 1662687 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.928525181s)
	I1119 03:03:39.030157 1662687 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.885997571s)
	I1119 03:03:39.031042 1662687 node_ready.go:35] waiting up to 6m0s for node "no-preload-800908" to be "Ready" ...
	I1119 03:03:39.031366 1662687 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.887342961s)
	I1119 03:03:39.031384 1662687 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1119 03:03:39.032468 1662687 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.759913242s)
	I1119 03:03:39.129652 1662687 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1119 03:03:38.196864 1665998 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 03:03:38.196889 1665998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 03:03:38.196955 1665998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-886248
	I1119 03:03:38.209076 1665998 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 03:03:38.209098 1665998 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 03:03:38.209173 1665998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-886248
	I1119 03:03:38.241840 1665998 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34930 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/newest-cni-886248/id_rsa Username:docker}
	I1119 03:03:38.262778 1665998 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34930 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/newest-cni-886248/id_rsa Username:docker}
	I1119 03:03:38.592007 1665998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 03:03:38.937341 1665998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 03:03:39.146946 1665998 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.028972236s)
	I1119 03:03:39.147091 1665998 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1119 03:03:39.147161 1665998 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.021810267s)
	I1119 03:03:39.147211 1665998 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 03:03:40.215200 1665998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.623157324s)
	I1119 03:03:40.215319 1665998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.277953703s)
	I1119 03:03:40.276332 1665998 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1119 03:03:39.132492 1662687 addons.go:515] duration metric: took 2.545719819s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1119 03:03:39.535743 1662687 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-800908" context rescaled to 1 replicas
	I1119 03:03:40.280139 1665998 addons.go:515] duration metric: took 2.161849038s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1119 03:03:40.305068 1665998 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.157837326s)
	I1119 03:03:40.305962 1665998 api_server.go:52] waiting for apiserver process to appear ...
	I1119 03:03:40.306013 1665998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 03:03:40.306089 1665998 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.158984092s)
	I1119 03:03:40.306102 1665998 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1119 03:03:40.328228 1665998 api_server.go:72] duration metric: took 2.210333245s to wait for apiserver process to appear ...
	I1119 03:03:40.328302 1665998 api_server.go:88] waiting for apiserver healthz status ...
	I1119 03:03:40.328334 1665998 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 03:03:40.350181 1665998 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1119 03:03:40.353811 1665998 api_server.go:141] control plane version: v1.34.1
	I1119 03:03:40.353880 1665998 api_server.go:131] duration metric: took 25.557282ms to wait for apiserver health ...
	I1119 03:03:40.353903 1665998 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 03:03:40.360979 1665998 system_pods.go:59] 9 kube-system pods found
	I1119 03:03:40.361076 1665998 system_pods.go:61] "coredns-66bc5c9577-jckcv" [26053b15-2941-45c2-a847-ba113e2897ee] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1119 03:03:40.361106 1665998 system_pods.go:61] "coredns-66bc5c9577-wh5wb" [92363de0-8e50-45e7-84f7-8d0e20fa6d64] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1119 03:03:40.361143 1665998 system_pods.go:61] "etcd-newest-cni-886248" [5dc760bc-b71b-4b72-b27d-abf96ba66665] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1119 03:03:40.361172 1665998 system_pods.go:61] "kindnet-wbjgj" [baa5b1cf-5f4f-4ca9-959c-af74d9f62f83] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1119 03:03:40.361192 1665998 system_pods.go:61] "kube-apiserver-newest-cni-886248" [f48c4478-6515-4447-a2d8-bc8683421e68] Running
	I1119 03:03:40.361214 1665998 system_pods.go:61] "kube-controller-manager-newest-cni-886248" [78d87a76-a5af-4b59-9688-1f684aa4eb86] Running
	I1119 03:03:40.361246 1665998 system_pods.go:61] "kube-proxy-kn684" [f1bf5af1-d3b3-4b9b-9d12-1f94f4b89a2f] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1119 03:03:40.361266 1665998 system_pods.go:61] "kube-scheduler-newest-cni-886248" [9d4bee4f-21a5-4c71-9174-885f35f536ac] Running
	I1119 03:03:40.361285 1665998 system_pods.go:61] "storage-provisioner" [4b774a63-0385-4354-91d0-0f4824a9a758] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1119 03:03:40.361307 1665998 system_pods.go:74] duration metric: took 7.382935ms to wait for pod list to return data ...
	I1119 03:03:40.361327 1665998 default_sa.go:34] waiting for default service account to be created ...
	I1119 03:03:40.373668 1665998 default_sa.go:45] found service account: "default"
	I1119 03:03:40.373736 1665998 default_sa.go:55] duration metric: took 12.382135ms for default service account to be created ...
	I1119 03:03:40.373764 1665998 kubeadm.go:587] duration metric: took 2.255875013s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1119 03:03:40.373813 1665998 node_conditions.go:102] verifying NodePressure condition ...
	I1119 03:03:40.388335 1665998 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1119 03:03:40.388419 1665998 node_conditions.go:123] node cpu capacity is 2
	I1119 03:03:40.388448 1665998 node_conditions.go:105] duration metric: took 14.612488ms to run NodePressure ...
	I1119 03:03:40.388486 1665998 start.go:242] waiting for startup goroutines ...
	I1119 03:03:40.810236 1665998 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-886248" context rescaled to 1 replicas
	I1119 03:03:40.810324 1665998 start.go:247] waiting for cluster config update ...
	I1119 03:03:40.810352 1665998 start.go:256] writing updated cluster config ...
	I1119 03:03:40.810679 1665998 ssh_runner.go:195] Run: rm -f paused
	I1119 03:03:40.907260 1665998 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1119 03:03:40.912552 1665998 out.go:179] * Done! kubectl is now configured to use "newest-cni-886248" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 19 03:03:39 newest-cni-886248 crio[846]: time="2025-11-19T03:03:39.553834168Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 03:03:39 newest-cni-886248 crio[846]: time="2025-11-19T03:03:39.564897465Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=c7c0acc2-6b6e-43d1-8a98-7745e9c8d25f name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 03:03:39 newest-cni-886248 crio[846]: time="2025-11-19T03:03:39.587564918Z" level=info msg="Ran pod sandbox 912c0b1b4f48e48ac5711c5fa5288fba46e1be50cb9bcc620622d7ad37c2f855 with infra container: kube-system/kindnet-wbjgj/POD" id=c7c0acc2-6b6e-43d1-8a98-7745e9c8d25f name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 03:03:39 newest-cni-886248 crio[846]: time="2025-11-19T03:03:39.590801617Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=d4c8812b-993d-416b-92e7-b49b462c3529 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 03:03:39 newest-cni-886248 crio[846]: time="2025-11-19T03:03:39.593595104Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=3613df97-6122-47d6-9b50-6d77bc485975 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 03:03:39 newest-cni-886248 crio[846]: time="2025-11-19T03:03:39.600566039Z" level=info msg="Creating container: kube-system/kindnet-wbjgj/kindnet-cni" id=70f71a31-983e-4730-8003-8458397cd747 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 03:03:39 newest-cni-886248 crio[846]: time="2025-11-19T03:03:39.60183495Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 03:03:39 newest-cni-886248 crio[846]: time="2025-11-19T03:03:39.606370426Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 03:03:39 newest-cni-886248 crio[846]: time="2025-11-19T03:03:39.606848665Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 03:03:39 newest-cni-886248 crio[846]: time="2025-11-19T03:03:39.653659648Z" level=info msg="Created container c89d2d32905ce72ca83e136fb45a84f991f2b931e91c1dffc2e0ed955fa67f6e: kube-system/kindnet-wbjgj/kindnet-cni" id=70f71a31-983e-4730-8003-8458397cd747 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 03:03:39 newest-cni-886248 crio[846]: time="2025-11-19T03:03:39.655649457Z" level=info msg="Starting container: c89d2d32905ce72ca83e136fb45a84f991f2b931e91c1dffc2e0ed955fa67f6e" id=b1fd3f59-2a63-4625-a827-da556ce950c7 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 03:03:39 newest-cni-886248 crio[846]: time="2025-11-19T03:03:39.662462825Z" level=info msg="Started container" PID=1488 containerID=c89d2d32905ce72ca83e136fb45a84f991f2b931e91c1dffc2e0ed955fa67f6e description=kube-system/kindnet-wbjgj/kindnet-cni id=b1fd3f59-2a63-4625-a827-da556ce950c7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=912c0b1b4f48e48ac5711c5fa5288fba46e1be50cb9bcc620622d7ad37c2f855
	Nov 19 03:03:40 newest-cni-886248 crio[846]: time="2025-11-19T03:03:40.590233751Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-kn684/POD" id=80c77bf5-9854-45c9-ba16-0f8a411a4c2e name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 03:03:40 newest-cni-886248 crio[846]: time="2025-11-19T03:03:40.590294722Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 03:03:40 newest-cni-886248 crio[846]: time="2025-11-19T03:03:40.595709787Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=80c77bf5-9854-45c9-ba16-0f8a411a4c2e name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 03:03:40 newest-cni-886248 crio[846]: time="2025-11-19T03:03:40.599869258Z" level=info msg="Ran pod sandbox e4238022febd3f011cbe43884d427bdf65990262e36b7686a262b6f22400dd0d with infra container: kube-system/kube-proxy-kn684/POD" id=80c77bf5-9854-45c9-ba16-0f8a411a4c2e name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 03:03:40 newest-cni-886248 crio[846]: time="2025-11-19T03:03:40.604140126Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=c2eddfd6-5271-4696-9ce4-6c7f45b5893c name=/runtime.v1.ImageService/ImageStatus
	Nov 19 03:03:40 newest-cni-886248 crio[846]: time="2025-11-19T03:03:40.611225125Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=36a2a21e-1084-440c-b084-7f3bea0cd7e9 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 03:03:40 newest-cni-886248 crio[846]: time="2025-11-19T03:03:40.618892901Z" level=info msg="Creating container: kube-system/kube-proxy-kn684/kube-proxy" id=0e78dc71-4ac4-4024-aee8-6aa2b0866cdb name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 03:03:40 newest-cni-886248 crio[846]: time="2025-11-19T03:03:40.619001493Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 03:03:40 newest-cni-886248 crio[846]: time="2025-11-19T03:03:40.628738929Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 03:03:40 newest-cni-886248 crio[846]: time="2025-11-19T03:03:40.629261695Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 03:03:40 newest-cni-886248 crio[846]: time="2025-11-19T03:03:40.655812844Z" level=info msg="Created container 72e27cdd6ca2f79246ac8c9fda849eed4a1734f07a65bfcf0fdde9bb94f8f7c3: kube-system/kube-proxy-kn684/kube-proxy" id=0e78dc71-4ac4-4024-aee8-6aa2b0866cdb name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 03:03:40 newest-cni-886248 crio[846]: time="2025-11-19T03:03:40.657791553Z" level=info msg="Starting container: 72e27cdd6ca2f79246ac8c9fda849eed4a1734f07a65bfcf0fdde9bb94f8f7c3" id=214d7992-cebc-41aa-b4d8-599792e6e360 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 03:03:40 newest-cni-886248 crio[846]: time="2025-11-19T03:03:40.668109074Z" level=info msg="Started container" PID=1542 containerID=72e27cdd6ca2f79246ac8c9fda849eed4a1734f07a65bfcf0fdde9bb94f8f7c3 description=kube-system/kube-proxy-kn684/kube-proxy id=214d7992-cebc-41aa-b4d8-599792e6e360 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e4238022febd3f011cbe43884d427bdf65990262e36b7686a262b6f22400dd0d
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	72e27cdd6ca2f       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   2 seconds ago       Running             kube-proxy                0                   e4238022febd3       kube-proxy-kn684                            kube-system
	c89d2d32905ce       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   3 seconds ago       Running             kindnet-cni               0                   912c0b1b4f48e       kindnet-wbjgj                               kube-system
	31d55e897c892       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   22 seconds ago      Running             kube-controller-manager   0                   6dc35d28d1483       kube-controller-manager-newest-cni-886248   kube-system
	e1a362cbc4102       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   22 seconds ago      Running             kube-apiserver            0                   e5f0f570d8faa       kube-apiserver-newest-cni-886248            kube-system
	de617b47783b6       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   22 seconds ago      Running             etcd                      0                   dae814d454252       etcd-newest-cni-886248                      kube-system
	6862ca3807782       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   22 seconds ago      Running             kube-scheduler            0                   bec57ae4ba6d3       kube-scheduler-newest-cni-886248            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-886248
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-886248
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ebd49efe8b7d44f89fc96e21beffcd64dfee8277
	                    minikube.k8s.io/name=newest-cni-886248
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T03_03_34_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 03:03:30 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-886248
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 03:03:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 03:03:33 +0000   Wed, 19 Nov 2025 03:03:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 03:03:33 +0000   Wed, 19 Nov 2025 03:03:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 03:03:33 +0000   Wed, 19 Nov 2025 03:03:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 19 Nov 2025 03:03:33 +0000   Wed, 19 Nov 2025 03:03:21 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-886248
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                aa6cbc50-f2b0-4528-80c3-566034a2d86c
	  Boot ID:                    b92b1939-fcd0-45dc-ac89-2d161566a71c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-886248                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         9s
	  kube-system                 kindnet-wbjgj                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      4s
	  kube-system                 kube-apiserver-newest-cni-886248             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kube-controller-manager-newest-cni-886248    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kube-proxy-kn684                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 kube-scheduler-newest-cni-886248             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 1s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  23s (x8 over 23s)  kubelet          Node newest-cni-886248 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    23s (x8 over 23s)  kubelet          Node newest-cni-886248 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     23s (x8 over 23s)  kubelet          Node newest-cni-886248 status is now: NodeHasSufficientPID
	  Normal   Starting                 9s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 9s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  9s                 kubelet          Node newest-cni-886248 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9s                 kubelet          Node newest-cni-886248 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9s                 kubelet          Node newest-cni-886248 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5s                 node-controller  Node newest-cni-886248 event: Registered Node newest-cni-886248 in Controller
	
	
	==> dmesg <==
	[Nov19 02:41] overlayfs: idmapped layers are currently not supported
	[ +25.528121] overlayfs: idmapped layers are currently not supported
	[ +11.329962] overlayfs: idmapped layers are currently not supported
	[Nov19 02:42] overlayfs: idmapped layers are currently not supported
	[ +16.386117] overlayfs: idmapped layers are currently not supported
	[Nov19 02:43] overlayfs: idmapped layers are currently not supported
	[ +23.762081] overlayfs: idmapped layers are currently not supported
	[Nov19 02:45] overlayfs: idmapped layers are currently not supported
	[Nov19 02:46] overlayfs: idmapped layers are currently not supported
	[Nov19 02:48] overlayfs: idmapped layers are currently not supported
	[Nov19 02:50] overlayfs: idmapped layers are currently not supported
	[ +30.622614] overlayfs: idmapped layers are currently not supported
	[Nov19 02:53] overlayfs: idmapped layers are currently not supported
	[Nov19 02:55] overlayfs: idmapped layers are currently not supported
	[ +48.629499] overlayfs: idmapped layers are currently not supported
	[Nov19 02:56] overlayfs: idmapped layers are currently not supported
	[ +31.470515] overlayfs: idmapped layers are currently not supported
	[Nov19 02:57] overlayfs: idmapped layers are currently not supported
	[Nov19 02:58] overlayfs: idmapped layers are currently not supported
	[Nov19 03:00] overlayfs: idmapped layers are currently not supported
	[  +8.385032] overlayfs: idmapped layers are currently not supported
	[Nov19 03:01] overlayfs: idmapped layers are currently not supported
	[  +9.842210] overlayfs: idmapped layers are currently not supported
	[Nov19 03:02] overlayfs: idmapped layers are currently not supported
	[Nov19 03:03] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [de617b47783b6fb4fc6137a6621a1b675d09d2aa3d6ad82214c1279e64dad3f0] <==
	{"level":"warn","ts":"2025-11-19T03:03:28.732470Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:03:28.760763Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:03:28.778368Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:03:28.793944Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:03:28.807242Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:03:28.822383Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:03:28.836918Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43784","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:03:28.852838Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:03:28.885994Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:03:28.896205Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:03:28.936913Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:03:28.953381Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:03:29.052461Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43908","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-19T03:03:39.092013Z","caller":"traceutil/trace.go:172","msg":"trace[697608034] linearizableReadLoop","detail":"{readStateIndex:387; appliedIndex:387; }","duration":"101.976425ms","start":"2025-11-19T03:03:38.990011Z","end":"2025-11-19T03:03:39.091987Z","steps":["trace[697608034] 'read index received'  (duration: 101.971773ms)","trace[697608034] 'applied index is now lower than readState.Index'  (duration: 3.823µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-19T03:03:39.110504Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"120.473839ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/coredns\" limit:1 ","response":"range_response_count:1 size:693"}
	{"level":"info","ts":"2025-11-19T03:03:39.110576Z","caller":"traceutil/trace.go:172","msg":"trace[60244362] range","detail":"{range_begin:/registry/configmaps/kube-system/coredns; range_end:; response_count:1; response_revision:377; }","duration":"120.54217ms","start":"2025-11-19T03:03:38.990006Z","end":"2025-11-19T03:03:39.110548Z","steps":["trace[60244362] 'agreement among raft nodes before linearized reading'  (duration: 119.972808ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T03:03:39.116873Z","caller":"traceutil/trace.go:172","msg":"trace[578214177] transaction","detail":"{read_only:false; response_revision:378; number_of_response:1; }","duration":"143.384518ms","start":"2025-11-19T03:03:38.973469Z","end":"2025-11-19T03:03:39.116853Z","steps":["trace[578214177] 'process raft request'  (duration: 110.295514ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-19T03:03:39.143536Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"118.794667ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kindnet-wbjgj\" limit:1 ","response":"range_response_count:1 size:3702"}
	{"level":"info","ts":"2025-11-19T03:03:39.143606Z","caller":"traceutil/trace.go:172","msg":"trace[689623890] range","detail":"{range_begin:/registry/pods/kube-system/kindnet-wbjgj; range_end:; response_count:1; response_revision:378; }","duration":"118.862087ms","start":"2025-11-19T03:03:39.024718Z","end":"2025-11-19T03:03:39.143580Z","steps":["trace[689623890] 'agreement among raft nodes before linearized reading'  (duration: 118.710025ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T03:03:39.150715Z","caller":"traceutil/trace.go:172","msg":"trace[265341946] transaction","detail":"{read_only:false; response_revision:382; number_of_response:1; }","duration":"125.354282ms","start":"2025-11-19T03:03:39.025343Z","end":"2025-11-19T03:03:39.150697Z","steps":["trace[265341946] 'process raft request'  (duration: 125.294222ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T03:03:39.156132Z","caller":"traceutil/trace.go:172","msg":"trace[1068032225] transaction","detail":"{read_only:false; response_revision:381; number_of_response:1; }","duration":"130.931329ms","start":"2025-11-19T03:03:39.025189Z","end":"2025-11-19T03:03:39.156121Z","steps":["trace[1068032225] 'process raft request'  (duration: 118.819783ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-19T03:03:39.151633Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"126.826954ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/kube-proxy\" limit:1 ","response":"range_response_count:1 size:185"}
	{"level":"info","ts":"2025-11-19T03:03:39.151928Z","caller":"traceutil/trace.go:172","msg":"trace[167652080] transaction","detail":"{read_only:false; response_revision:380; number_of_response:1; }","duration":"126.99984ms","start":"2025-11-19T03:03:39.024889Z","end":"2025-11-19T03:03:39.151889Z","steps":["trace[167652080] 'process raft request'  (duration: 119.085285ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T03:03:39.155700Z","caller":"traceutil/trace.go:172","msg":"trace[1237855708] transaction","detail":"{read_only:false; response_revision:379; number_of_response:1; }","duration":"130.668928ms","start":"2025-11-19T03:03:39.024817Z","end":"2025-11-19T03:03:39.155486Z","steps":["trace[1237855708] 'process raft request'  (duration: 119.066454ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T03:03:39.206504Z","caller":"traceutil/trace.go:172","msg":"trace[1473659866] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/kube-proxy; range_end:; response_count:1; response_revision:382; }","duration":"132.085618ms","start":"2025-11-19T03:03:39.024762Z","end":"2025-11-19T03:03:39.156848Z","steps":["trace[1473659866] 'agreement among raft nodes before linearized reading'  (duration: 126.714014ms)"],"step_count":1}
	
	
	==> kernel <==
	 03:03:42 up 10:45,  0 user,  load average: 6.16, 4.05, 3.01
	Linux newest-cni-886248 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c89d2d32905ce72ca83e136fb45a84f991f2b931e91c1dffc2e0ed955fa67f6e] <==
	I1119 03:03:39.825911       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 03:03:39.826528       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1119 03:03:39.826653       1 main.go:148] setting mtu 1500 for CNI 
	I1119 03:03:39.826666       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 03:03:39.826677       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T03:03:39Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 03:03:39.956332       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 03:03:40.027105       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 03:03:40.027148       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 03:03:40.028156       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [e1a362cbc41028d0ac9f2d6937946f5574b2d3fd5da85d517fcaaec1305dd7c2] <==
	I1119 03:03:30.297620       1 cache.go:39] Caches are synced for autoregister controller
	I1119 03:03:30.350237       1 controller.go:667] quota admission added evaluator for: namespaces
	I1119 03:03:30.367143       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 03:03:30.369830       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 03:03:30.376806       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1119 03:03:30.404873       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 03:03:30.406936       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1119 03:03:30.406960       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1119 03:03:30.888774       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1119 03:03:30.904785       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1119 03:03:30.904815       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 03:03:32.106736       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 03:03:32.159501       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 03:03:32.242711       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1119 03:03:32.326392       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1119 03:03:32.353989       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1119 03:03:32.355504       1 controller.go:667] quota admission added evaluator for: endpoints
	I1119 03:03:32.391191       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1119 03:03:33.178259       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1119 03:03:33.206661       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1119 03:03:33.219012       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1119 03:03:38.229897       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1119 03:03:38.429790       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1119 03:03:38.587352       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 03:03:38.757130       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [31d55e897c892ab76b067f55784174e3ad4845ff4a3ab0e8f4fa3f4d3efeea9b] <==
	I1119 03:03:37.355881       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1119 03:03:37.355898       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1119 03:03:37.291311       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1119 03:03:37.301328       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1119 03:03:37.346310       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1119 03:03:37.346331       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1119 03:03:37.346352       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1119 03:03:37.352377       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1119 03:03:37.363206       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1119 03:03:37.386591       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1119 03:03:37.389771       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 03:03:37.401293       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 03:03:37.434650       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1119 03:03:37.434729       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1119 03:03:37.434807       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1119 03:03:37.434877       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1119 03:03:37.435052       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 03:03:37.435174       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1119 03:03:37.435280       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-886248"
	I1119 03:03:37.435319       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1119 03:03:37.436524       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1119 03:03:37.444674       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 03:03:37.444777       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1119 03:03:37.444809       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1119 03:03:37.448517       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	
	
	==> kube-proxy [72e27cdd6ca2f79246ac8c9fda849eed4a1734f07a65bfcf0fdde9bb94f8f7c3] <==
	I1119 03:03:40.719953       1 server_linux.go:53] "Using iptables proxy"
	I1119 03:03:40.791239       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 03:03:40.891950       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 03:03:40.892046       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1119 03:03:40.892166       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 03:03:41.013408       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 03:03:41.013462       1 server_linux.go:132] "Using iptables Proxier"
	I1119 03:03:41.029634       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 03:03:41.030001       1 server.go:527] "Version info" version="v1.34.1"
	I1119 03:03:41.030061       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 03:03:41.039787       1 config.go:200] "Starting service config controller"
	I1119 03:03:41.039887       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 03:03:41.041041       1 config.go:106] "Starting endpoint slice config controller"
	I1119 03:03:41.041115       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 03:03:41.041182       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 03:03:41.041229       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 03:03:41.041885       1 config.go:309] "Starting node config controller"
	I1119 03:03:41.041954       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 03:03:41.041984       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 03:03:41.140286       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1119 03:03:41.141425       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1119 03:03:41.141453       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [6862ca38077823de2c6e0c59d0995a22dc2814acad8c924e15cab9abc37430be] <==
	E1119 03:03:30.458058       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1119 03:03:30.458197       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1119 03:03:30.458416       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1119 03:03:30.458507       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1119 03:03:30.458601       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1119 03:03:30.458685       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1119 03:03:30.458790       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1119 03:03:30.458884       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1119 03:03:30.458976       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1119 03:03:30.459074       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1119 03:03:30.459179       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1119 03:03:30.459290       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1119 03:03:30.459401       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1119 03:03:30.459446       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1119 03:03:31.275706       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1119 03:03:31.276241       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1119 03:03:31.303377       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1119 03:03:31.345253       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1119 03:03:31.385228       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1119 03:03:31.388726       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1119 03:03:31.403453       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1119 03:03:31.423149       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1119 03:03:31.510779       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1119 03:03:31.513296       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	I1119 03:03:33.987368       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 19 03:03:34 newest-cni-886248 kubelet[1314]: I1119 03:03:34.268944    1314 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-886248"
	Nov 19 03:03:34 newest-cni-886248 kubelet[1314]: I1119 03:03:34.269410    1314 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-886248"
	Nov 19 03:03:34 newest-cni-886248 kubelet[1314]: E1119 03:03:34.309452    1314 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-886248\" already exists" pod="kube-system/etcd-newest-cni-886248"
	Nov 19 03:03:34 newest-cni-886248 kubelet[1314]: E1119 03:03:34.312960    1314 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-886248\" already exists" pod="kube-system/kube-scheduler-newest-cni-886248"
	Nov 19 03:03:34 newest-cni-886248 kubelet[1314]: I1119 03:03:34.320614    1314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-886248" podStartSLOduration=3.320597738 podStartE2EDuration="3.320597738s" podCreationTimestamp="2025-11-19 03:03:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 03:03:34.31739253 +0000 UTC m=+1.284509482" watchObservedRunningTime="2025-11-19 03:03:34.320597738 +0000 UTC m=+1.287714691"
	Nov 19 03:03:34 newest-cni-886248 kubelet[1314]: I1119 03:03:34.353317    1314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-886248" podStartSLOduration=1.353289856 podStartE2EDuration="1.353289856s" podCreationTimestamp="2025-11-19 03:03:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 03:03:34.346088692 +0000 UTC m=+1.313205653" watchObservedRunningTime="2025-11-19 03:03:34.353289856 +0000 UTC m=+1.320406809"
	Nov 19 03:03:34 newest-cni-886248 kubelet[1314]: I1119 03:03:34.373949    1314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-886248" podStartSLOduration=1.37393148 podStartE2EDuration="1.37393148s" podCreationTimestamp="2025-11-19 03:03:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 03:03:34.371987675 +0000 UTC m=+1.339104636" watchObservedRunningTime="2025-11-19 03:03:34.37393148 +0000 UTC m=+1.341048433"
	Nov 19 03:03:34 newest-cni-886248 kubelet[1314]: I1119 03:03:34.649014    1314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-886248" podStartSLOduration=1.6489968689999999 podStartE2EDuration="1.648996869s" podCreationTimestamp="2025-11-19 03:03:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 03:03:34.39842206 +0000 UTC m=+1.365539037" watchObservedRunningTime="2025-11-19 03:03:34.648996869 +0000 UTC m=+1.616113822"
	Nov 19 03:03:37 newest-cni-886248 kubelet[1314]: I1119 03:03:37.372324    1314 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 19 03:03:37 newest-cni-886248 kubelet[1314]: I1119 03:03:37.375090    1314 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 19 03:03:38 newest-cni-886248 kubelet[1314]: I1119 03:03:38.753963    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/baa5b1cf-5f4f-4ca9-959c-af74d9f62f83-cni-cfg\") pod \"kindnet-wbjgj\" (UID: \"baa5b1cf-5f4f-4ca9-959c-af74d9f62f83\") " pod="kube-system/kindnet-wbjgj"
	Nov 19 03:03:38 newest-cni-886248 kubelet[1314]: I1119 03:03:38.754159    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/baa5b1cf-5f4f-4ca9-959c-af74d9f62f83-lib-modules\") pod \"kindnet-wbjgj\" (UID: \"baa5b1cf-5f4f-4ca9-959c-af74d9f62f83\") " pod="kube-system/kindnet-wbjgj"
	Nov 19 03:03:38 newest-cni-886248 kubelet[1314]: I1119 03:03:38.754238    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5prmw\" (UniqueName: \"kubernetes.io/projected/baa5b1cf-5f4f-4ca9-959c-af74d9f62f83-kube-api-access-5prmw\") pod \"kindnet-wbjgj\" (UID: \"baa5b1cf-5f4f-4ca9-959c-af74d9f62f83\") " pod="kube-system/kindnet-wbjgj"
	Nov 19 03:03:38 newest-cni-886248 kubelet[1314]: I1119 03:03:38.754263    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/baa5b1cf-5f4f-4ca9-959c-af74d9f62f83-xtables-lock\") pod \"kindnet-wbjgj\" (UID: \"baa5b1cf-5f4f-4ca9-959c-af74d9f62f83\") " pod="kube-system/kindnet-wbjgj"
	Nov 19 03:03:38 newest-cni-886248 kubelet[1314]: E1119 03:03:38.851370    1314 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:newest-cni-886248\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'newest-cni-886248' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Nov 19 03:03:38 newest-cni-886248 kubelet[1314]: I1119 03:03:38.856265    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f1bf5af1-d3b3-4b9b-9d12-1f94f4b89a2f-kube-proxy\") pod \"kube-proxy-kn684\" (UID: \"f1bf5af1-d3b3-4b9b-9d12-1f94f4b89a2f\") " pod="kube-system/kube-proxy-kn684"
	Nov 19 03:03:38 newest-cni-886248 kubelet[1314]: I1119 03:03:38.856320    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f1bf5af1-d3b3-4b9b-9d12-1f94f4b89a2f-xtables-lock\") pod \"kube-proxy-kn684\" (UID: \"f1bf5af1-d3b3-4b9b-9d12-1f94f4b89a2f\") " pod="kube-system/kube-proxy-kn684"
	Nov 19 03:03:38 newest-cni-886248 kubelet[1314]: I1119 03:03:38.856338    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f1bf5af1-d3b3-4b9b-9d12-1f94f4b89a2f-lib-modules\") pod \"kube-proxy-kn684\" (UID: \"f1bf5af1-d3b3-4b9b-9d12-1f94f4b89a2f\") " pod="kube-system/kube-proxy-kn684"
	Nov 19 03:03:38 newest-cni-886248 kubelet[1314]: I1119 03:03:38.856369    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpk87\" (UniqueName: \"kubernetes.io/projected/f1bf5af1-d3b3-4b9b-9d12-1f94f4b89a2f-kube-api-access-lpk87\") pod \"kube-proxy-kn684\" (UID: \"f1bf5af1-d3b3-4b9b-9d12-1f94f4b89a2f\") " pod="kube-system/kube-proxy-kn684"
	Nov 19 03:03:39 newest-cni-886248 kubelet[1314]: I1119 03:03:39.244564    1314 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 19 03:03:39 newest-cni-886248 kubelet[1314]: W1119 03:03:39.586927    1314 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/9ceb6de1b4d7ca96fef3ff191371417b9c4e247fd4589ac0d2a1c844b9532578/crio-912c0b1b4f48e48ac5711c5fa5288fba46e1be50cb9bcc620622d7ad37c2f855 WatchSource:0}: Error finding container 912c0b1b4f48e48ac5711c5fa5288fba46e1be50cb9bcc620622d7ad37c2f855: Status 404 returned error can't find the container with id 912c0b1b4f48e48ac5711c5fa5288fba46e1be50cb9bcc620622d7ad37c2f855
	Nov 19 03:03:39 newest-cni-886248 kubelet[1314]: E1119 03:03:39.965906    1314 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
	Nov 19 03:03:39 newest-cni-886248 kubelet[1314]: E1119 03:03:39.966019    1314 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f1bf5af1-d3b3-4b9b-9d12-1f94f4b89a2f-kube-proxy podName:f1bf5af1-d3b3-4b9b-9d12-1f94f4b89a2f nodeName:}" failed. No retries permitted until 2025-11-19 03:03:40.465990775 +0000 UTC m=+7.433107728 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/f1bf5af1-d3b3-4b9b-9d12-1f94f4b89a2f-kube-proxy") pod "kube-proxy-kn684" (UID: "f1bf5af1-d3b3-4b9b-9d12-1f94f4b89a2f") : failed to sync configmap cache: timed out waiting for the condition
	Nov 19 03:03:40 newest-cni-886248 kubelet[1314]: W1119 03:03:40.600130    1314 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/9ceb6de1b4d7ca96fef3ff191371417b9c4e247fd4589ac0d2a1c844b9532578/crio-e4238022febd3f011cbe43884d427bdf65990262e36b7686a262b6f22400dd0d WatchSource:0}: Error finding container e4238022febd3f011cbe43884d427bdf65990262e36b7686a262b6f22400dd0d: Status 404 returned error can't find the container with id e4238022febd3f011cbe43884d427bdf65990262e36b7686a262b6f22400dd0d
	Nov 19 03:03:41 newest-cni-886248 kubelet[1314]: I1119 03:03:41.362842    1314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-wbjgj" podStartSLOduration=3.362824673 podStartE2EDuration="3.362824673s" podCreationTimestamp="2025-11-19 03:03:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 03:03:40.378540848 +0000 UTC m=+7.345657817" watchObservedRunningTime="2025-11-19 03:03:41.362824673 +0000 UTC m=+8.329941634"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-886248 -n newest-cni-886248
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-886248 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-wh5wb storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-886248 describe pod coredns-66bc5c9577-wh5wb storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-886248 describe pod coredns-66bc5c9577-wh5wb storage-provisioner: exit status 1 (86.10104ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-wh5wb" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-886248 describe pod coredns-66bc5c9577-wh5wb storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.96s)
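Note on the post-mortem above: the field-selector query at helpers_test.go:269 reported coredns-66bc5c9577-wh5wb and storage-provisioner as non-running, yet the follow-up describe at helpers_test.go:285 returned NotFound for both, which suggests the pods were deleted or recreated under new names between the two kubectl calls. A minimal sketch of capturing the same snapshot in a single call, assuming the same kubectl context; the -o yaml dump is illustrative only and not what helpers_test.go actually runs:

	# Hypothetical one-shot capture: list and dump non-running pods in one call,
	# so a pod deleted between "get" and "describe" cannot yield a NotFound error.
	kubectl --context newest-cni-886248 get pods -A \
	  --field-selector=status.phase!=Running -o yaml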

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (6.03s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-886248 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p newest-cni-886248 --alsologtostderr -v=1: exit status 80 (1.825856765s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-886248 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1119 03:04:01.263039 1672062 out.go:360] Setting OutFile to fd 1 ...
	I1119 03:04:01.263226 1672062 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 03:04:01.263253 1672062 out.go:374] Setting ErrFile to fd 2...
	I1119 03:04:01.263274 1672062 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 03:04:01.263715 1672062 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-1463525/.minikube/bin
	I1119 03:04:01.264469 1672062 out.go:368] Setting JSON to false
	I1119 03:04:01.264549 1672062 mustload.go:66] Loading cluster: newest-cni-886248
	I1119 03:04:01.265038 1672062 config.go:182] Loaded profile config "newest-cni-886248": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 03:04:01.265761 1672062 cli_runner.go:164] Run: docker container inspect newest-cni-886248 --format={{.State.Status}}
	I1119 03:04:01.296451 1672062 host.go:66] Checking if "newest-cni-886248" exists ...
	I1119 03:04:01.296851 1672062 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 03:04:01.369020 1672062 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-19 03:04:01.357356544 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 03:04:01.370010 1672062 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-886248 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1119 03:04:01.373699 1672062 out.go:179] * Pausing node newest-cni-886248 ... 
	I1119 03:04:01.376707 1672062 host.go:66] Checking if "newest-cni-886248" exists ...
	I1119 03:04:01.377041 1672062 ssh_runner.go:195] Run: systemctl --version
	I1119 03:04:01.377093 1672062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-886248
	I1119 03:04:01.402144 1672062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34935 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/newest-cni-886248/id_rsa Username:docker}
	I1119 03:04:01.508601 1672062 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 03:04:01.523907 1672062 pause.go:52] kubelet running: true
	I1119 03:04:01.523975 1672062 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1119 03:04:01.765000 1672062 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1119 03:04:01.765242 1672062 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1119 03:04:01.880722 1672062 cri.go:89] found id: "972e1acc7cbb218ba9da3cd0acc60b2dc76dcbd471980cb3d49154c780725250"
	I1119 03:04:01.880742 1672062 cri.go:89] found id: "6fe616ee0273c716f6e7a6fb7b7ac5a8ff750f30a0b8d2ed8d266d7ad6a45adc"
	I1119 03:04:01.880747 1672062 cri.go:89] found id: "e1eed48672587b0f0942e9efafdc58e46f8385b96a631acf88ebc24aca51da13"
	I1119 03:04:01.880751 1672062 cri.go:89] found id: "6d128baea56ffd78984f08dc3dc92a053e8b13d6136d8a220e0fb895c448d4be"
	I1119 03:04:01.880754 1672062 cri.go:89] found id: "ffb0198ce7f012092c5e61eeb22ee641ada1a435b1cd87da1b7ad5f0d00519fc"
	I1119 03:04:01.880758 1672062 cri.go:89] found id: "da8265e8e46cd2db7db56b1bcfe9737eace63b799347a005a9b97166455a3aff"
	I1119 03:04:01.880761 1672062 cri.go:89] found id: ""
	I1119 03:04:01.880809 1672062 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 03:04:01.898170 1672062 retry.go:31] will retry after 190.68499ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T03:04:01Z" level=error msg="open /run/runc: no such file or directory"
	I1119 03:04:02.089647 1672062 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 03:04:02.103968 1672062 pause.go:52] kubelet running: false
	I1119 03:04:02.104092 1672062 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1119 03:04:02.277379 1672062 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1119 03:04:02.277585 1672062 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1119 03:04:02.366234 1672062 cri.go:89] found id: "972e1acc7cbb218ba9da3cd0acc60b2dc76dcbd471980cb3d49154c780725250"
	I1119 03:04:02.366314 1672062 cri.go:89] found id: "6fe616ee0273c716f6e7a6fb7b7ac5a8ff750f30a0b8d2ed8d266d7ad6a45adc"
	I1119 03:04:02.366347 1672062 cri.go:89] found id: "e1eed48672587b0f0942e9efafdc58e46f8385b96a631acf88ebc24aca51da13"
	I1119 03:04:02.366365 1672062 cri.go:89] found id: "6d128baea56ffd78984f08dc3dc92a053e8b13d6136d8a220e0fb895c448d4be"
	I1119 03:04:02.366397 1672062 cri.go:89] found id: "ffb0198ce7f012092c5e61eeb22ee641ada1a435b1cd87da1b7ad5f0d00519fc"
	I1119 03:04:02.366422 1672062 cri.go:89] found id: "da8265e8e46cd2db7db56b1bcfe9737eace63b799347a005a9b97166455a3aff"
	I1119 03:04:02.366440 1672062 cri.go:89] found id: ""
	I1119 03:04:02.366550 1672062 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 03:04:02.379574 1672062 retry.go:31] will retry after 373.397046ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T03:04:02Z" level=error msg="open /run/runc: no such file or directory"
	I1119 03:04:02.753182 1672062 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 03:04:02.766675 1672062 pause.go:52] kubelet running: false
	I1119 03:04:02.766774 1672062 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1119 03:04:02.910301 1672062 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1119 03:04:02.910384 1672062 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1119 03:04:02.978689 1672062 cri.go:89] found id: "972e1acc7cbb218ba9da3cd0acc60b2dc76dcbd471980cb3d49154c780725250"
	I1119 03:04:02.978713 1672062 cri.go:89] found id: "6fe616ee0273c716f6e7a6fb7b7ac5a8ff750f30a0b8d2ed8d266d7ad6a45adc"
	I1119 03:04:02.978718 1672062 cri.go:89] found id: "e1eed48672587b0f0942e9efafdc58e46f8385b96a631acf88ebc24aca51da13"
	I1119 03:04:02.978722 1672062 cri.go:89] found id: "6d128baea56ffd78984f08dc3dc92a053e8b13d6136d8a220e0fb895c448d4be"
	I1119 03:04:02.978725 1672062 cri.go:89] found id: "ffb0198ce7f012092c5e61eeb22ee641ada1a435b1cd87da1b7ad5f0d00519fc"
	I1119 03:04:02.978728 1672062 cri.go:89] found id: "da8265e8e46cd2db7db56b1bcfe9737eace63b799347a005a9b97166455a3aff"
	I1119 03:04:02.978732 1672062 cri.go:89] found id: ""
	I1119 03:04:02.978789 1672062 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 03:04:02.994079 1672062 out.go:203] 
	W1119 03:04:02.997081 1672062 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T03:04:02Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T03:04:02Z" level=error msg="open /run/runc: no such file or directory"
	
	W1119 03:04:02.997125 1672062 out.go:285] * 
	* 
	W1119 03:04:03.008191 1672062 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 03:04:03.013267 1672062 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p newest-cni-886248 --alsologtostderr -v=1 failed: exit status 80
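Note on the failure above: the pause path disables the kubelet, enumerates kube-system/kubernetes-dashboard/istio-operator containers with crictl, then runs sudo runc list -f json, which exits 1 because /run/runc does not exist on this CRI-O node. A rough manual re-run of those probes, assuming SSH access to the node through minikube ssh; the /run/crun path is a guess at an alternate runtime state directory, not something confirmed by this report:

	# Repeat the probes that minikube pause performed, step by step.
	minikube -p newest-cni-886248 ssh -- sudo systemctl is-active kubelet
	minikube -p newest-cni-886248 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# The failing call: runc's default state root is /run/runc.
	minikube -p newest-cni-886248 ssh -- sudo runc list -f json
	# If the OCI runtime keeps its state elsewhere (e.g. /run/crun, a guess),
	# the containers would be visible under that root instead:
	minikube -p newest-cni-886248 ssh -- sudo ls /run/runc /run/crun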
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-886248
helpers_test.go:243: (dbg) docker inspect newest-cni-886248:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9ceb6de1b4d7ca96fef3ff191371417b9c4e247fd4589ac0d2a1c844b9532578",
	        "Created": "2025-11-19T03:02:56.888437987Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1670302,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T03:03:45.860284425Z",
	            "FinishedAt": "2025-11-19T03:03:44.773135045Z"
	        },
	        "Image": "sha256:6dfeb5329cc83d126555f88f43b09eef7a09c7f546c9166b94d33747df91b6df",
	        "ResolvConfPath": "/var/lib/docker/containers/9ceb6de1b4d7ca96fef3ff191371417b9c4e247fd4589ac0d2a1c844b9532578/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9ceb6de1b4d7ca96fef3ff191371417b9c4e247fd4589ac0d2a1c844b9532578/hostname",
	        "HostsPath": "/var/lib/docker/containers/9ceb6de1b4d7ca96fef3ff191371417b9c4e247fd4589ac0d2a1c844b9532578/hosts",
	        "LogPath": "/var/lib/docker/containers/9ceb6de1b4d7ca96fef3ff191371417b9c4e247fd4589ac0d2a1c844b9532578/9ceb6de1b4d7ca96fef3ff191371417b9c4e247fd4589ac0d2a1c844b9532578-json.log",
	        "Name": "/newest-cni-886248",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-886248:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-886248",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9ceb6de1b4d7ca96fef3ff191371417b9c4e247fd4589ac0d2a1c844b9532578",
	                "LowerDir": "/var/lib/docker/overlay2/a7a465fbddf49e2c56bd6046cea36a4642d75f6313895eecf81a83070429de04-init/diff:/var/lib/docker/overlay2/c48d08e2bd245db4e1c5c6447aff9f72126e9377265a1f1172daf5070a059e2a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a7a465fbddf49e2c56bd6046cea36a4642d75f6313895eecf81a83070429de04/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a7a465fbddf49e2c56bd6046cea36a4642d75f6313895eecf81a83070429de04/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a7a465fbddf49e2c56bd6046cea36a4642d75f6313895eecf81a83070429de04/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-886248",
	                "Source": "/var/lib/docker/volumes/newest-cni-886248/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-886248",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-886248",
	                "name.minikube.sigs.k8s.io": "newest-cni-886248",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d66a7bcf483640e1a9f4a322ed49bd824d1931e43ad50c9d2ade68147972e1f1",
	            "SandboxKey": "/var/run/docker/netns/d66a7bcf4836",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34935"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34936"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34939"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34937"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34938"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-886248": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "12:9e:64:02:3c:e5",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "40af1a67c106c706985a7e4604847892fa565af460e6b79b193e66105f198b32",
	                    "EndpointID": "d992c5be63db9e61bd795622a740a6a7856427e87be5ecb400293d30b934d6bc",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-886248",
	                        "9ceb6de1b4d7"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-886248 -n newest-cni-886248
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-886248 -n newest-cni-886248: exit status 2 (354.645736ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-886248 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-886248 logs -n 25: (1.100718281s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ addons  │ enable metrics-server -p default-k8s-diff-port-579203 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-579203 │ jenkins │ v1.37.0 │ 19 Nov 25 03:01 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-579203 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-579203 │ jenkins │ v1.37.0 │ 19 Nov 25 03:01 UTC │ 19 Nov 25 03:01 UTC │
	│ addons  │ enable metrics-server -p embed-certs-592123 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-592123           │ jenkins │ v1.37.0 │ 19 Nov 25 03:01 UTC │                     │
	│ stop    │ -p embed-certs-592123 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-592123           │ jenkins │ v1.37.0 │ 19 Nov 25 03:01 UTC │ 19 Nov 25 03:01 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-579203 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-579203 │ jenkins │ v1.37.0 │ 19 Nov 25 03:01 UTC │ 19 Nov 25 03:01 UTC │
	│ start   │ -p default-k8s-diff-port-579203 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-579203 │ jenkins │ v1.37.0 │ 19 Nov 25 03:01 UTC │ 19 Nov 25 03:02 UTC │
	│ addons  │ enable dashboard -p embed-certs-592123 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-592123           │ jenkins │ v1.37.0 │ 19 Nov 25 03:01 UTC │ 19 Nov 25 03:01 UTC │
	│ start   │ -p embed-certs-592123 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-592123           │ jenkins │ v1.37.0 │ 19 Nov 25 03:01 UTC │ 19 Nov 25 03:02 UTC │
	│ image   │ default-k8s-diff-port-579203 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-579203 │ jenkins │ v1.37.0 │ 19 Nov 25 03:02 UTC │ 19 Nov 25 03:02 UTC │
	│ pause   │ -p default-k8s-diff-port-579203 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-579203 │ jenkins │ v1.37.0 │ 19 Nov 25 03:02 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-579203                                                                                                                                                                                                               │ default-k8s-diff-port-579203 │ jenkins │ v1.37.0 │ 19 Nov 25 03:02 UTC │ 19 Nov 25 03:02 UTC │
	│ delete  │ -p default-k8s-diff-port-579203                                                                                                                                                                                                               │ default-k8s-diff-port-579203 │ jenkins │ v1.37.0 │ 19 Nov 25 03:02 UTC │ 19 Nov 25 03:02 UTC │
	│ delete  │ -p disable-driver-mounts-722439                                                                                                                                                                                                               │ disable-driver-mounts-722439 │ jenkins │ v1.37.0 │ 19 Nov 25 03:02 UTC │ 19 Nov 25 03:02 UTC │
	│ start   │ -p no-preload-800908 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-800908            │ jenkins │ v1.37.0 │ 19 Nov 25 03:02 UTC │ 19 Nov 25 03:03 UTC │
	│ image   │ embed-certs-592123 image list --format=json                                                                                                                                                                                                   │ embed-certs-592123           │ jenkins │ v1.37.0 │ 19 Nov 25 03:02 UTC │ 19 Nov 25 03:02 UTC │
	│ pause   │ -p embed-certs-592123 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-592123           │ jenkins │ v1.37.0 │ 19 Nov 25 03:02 UTC │                     │
	│ delete  │ -p embed-certs-592123                                                                                                                                                                                                                         │ embed-certs-592123           │ jenkins │ v1.37.0 │ 19 Nov 25 03:02 UTC │ 19 Nov 25 03:02 UTC │
	│ delete  │ -p embed-certs-592123                                                                                                                                                                                                                         │ embed-certs-592123           │ jenkins │ v1.37.0 │ 19 Nov 25 03:02 UTC │ 19 Nov 25 03:02 UTC │
	│ start   │ -p newest-cni-886248 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-886248            │ jenkins │ v1.37.0 │ 19 Nov 25 03:02 UTC │ 19 Nov 25 03:03 UTC │
	│ addons  │ enable metrics-server -p newest-cni-886248 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-886248            │ jenkins │ v1.37.0 │ 19 Nov 25 03:03 UTC │                     │
	│ stop    │ -p newest-cni-886248 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-886248            │ jenkins │ v1.37.0 │ 19 Nov 25 03:03 UTC │ 19 Nov 25 03:03 UTC │
	│ addons  │ enable dashboard -p newest-cni-886248 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-886248            │ jenkins │ v1.37.0 │ 19 Nov 25 03:03 UTC │ 19 Nov 25 03:03 UTC │
	│ start   │ -p newest-cni-886248 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-886248            │ jenkins │ v1.37.0 │ 19 Nov 25 03:03 UTC │ 19 Nov 25 03:04 UTC │
	│ image   │ newest-cni-886248 image list --format=json                                                                                                                                                                                                    │ newest-cni-886248            │ jenkins │ v1.37.0 │ 19 Nov 25 03:04 UTC │ 19 Nov 25 03:04 UTC │
	│ pause   │ -p newest-cni-886248 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-886248            │ jenkins │ v1.37.0 │ 19 Nov 25 03:04 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 03:03:45
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 03:03:45.591904 1670171 out.go:360] Setting OutFile to fd 1 ...
	I1119 03:03:45.592050 1670171 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 03:03:45.592063 1670171 out.go:374] Setting ErrFile to fd 2...
	I1119 03:03:45.592081 1670171 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 03:03:45.592410 1670171 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-1463525/.minikube/bin
	I1119 03:03:45.592842 1670171 out.go:368] Setting JSON to false
	I1119 03:03:45.593878 1670171 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":38753,"bootTime":1763482673,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1119 03:03:45.593951 1670171 start.go:143] virtualization:  
	I1119 03:03:45.597601 1670171 out.go:179] * [newest-cni-886248] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1119 03:03:45.601450 1670171 out.go:179]   - MINIKUBE_LOCATION=21924
	I1119 03:03:45.601549 1670171 notify.go:221] Checking for updates...
	I1119 03:03:45.607475 1670171 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 03:03:45.610383 1670171 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21924-1463525/kubeconfig
	I1119 03:03:45.613272 1670171 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-1463525/.minikube
	I1119 03:03:45.616127 1670171 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1119 03:03:45.618961 1670171 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 03:03:45.622305 1670171 config.go:182] Loaded profile config "newest-cni-886248": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 03:03:45.622921 1670171 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 03:03:45.642673 1670171 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1119 03:03:45.642800 1670171 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 03:03:45.699691 1670171 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-19 03:03:45.690485624 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 03:03:45.699792 1670171 docker.go:319] overlay module found
	I1119 03:03:45.702930 1670171 out.go:179] * Using the docker driver based on existing profile
	I1119 03:03:45.705891 1670171 start.go:309] selected driver: docker
	I1119 03:03:45.705931 1670171 start.go:930] validating driver "docker" against &{Name:newest-cni-886248 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-886248 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 03:03:45.706030 1670171 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 03:03:45.706748 1670171 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 03:03:45.759617 1670171 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-19 03:03:45.750891827 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 03:03:45.759984 1670171 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1119 03:03:45.760016 1670171 cni.go:84] Creating CNI manager for ""
	I1119 03:03:45.760074 1670171 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 03:03:45.760121 1670171 start.go:353] cluster config:
	{Name:newest-cni-886248 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-886248 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 03:03:45.763259 1670171 out.go:179] * Starting "newest-cni-886248" primary control-plane node in "newest-cni-886248" cluster
	I1119 03:03:45.766086 1670171 cache.go:134] Beginning downloading kic base image for docker with crio
	I1119 03:03:45.769085 1670171 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1119 03:03:45.776464 1670171 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 03:03:45.776530 1670171 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1119 03:03:45.776541 1670171 cache.go:65] Caching tarball of preloaded images
	I1119 03:03:45.776564 1670171 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1119 03:03:45.776640 1670171 preload.go:238] Found /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1119 03:03:45.776651 1670171 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 03:03:45.776764 1670171 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/newest-cni-886248/config.json ...
	I1119 03:03:45.796299 1670171 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1119 03:03:45.796323 1670171 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1119 03:03:45.796337 1670171 cache.go:243] Successfully downloaded all kic artifacts
	I1119 03:03:45.796360 1670171 start.go:360] acquireMachinesLock for newest-cni-886248: {Name:mkfb71f15fb61e4b42e0e59e9b569595aaffd1c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 03:03:45.796418 1670171 start.go:364] duration metric: took 36.208µs to acquireMachinesLock for "newest-cni-886248"
	I1119 03:03:45.796442 1670171 start.go:96] Skipping create...Using existing machine configuration
	I1119 03:03:45.796451 1670171 fix.go:54] fixHost starting: 
	I1119 03:03:45.796704 1670171 cli_runner.go:164] Run: docker container inspect newest-cni-886248 --format={{.State.Status}}
	I1119 03:03:45.813325 1670171 fix.go:112] recreateIfNeeded on newest-cni-886248: state=Stopped err=<nil>
	W1119 03:03:45.813357 1670171 fix.go:138] unexpected machine state, will restart: <nil>
	W1119 03:03:47.534190 1662687 node_ready.go:57] node "no-preload-800908" has "Ready":"False" status (will retry)
	W1119 03:03:49.534810 1662687 node_ready.go:57] node "no-preload-800908" has "Ready":"False" status (will retry)
	I1119 03:03:45.816577 1670171 out.go:252] * Restarting existing docker container for "newest-cni-886248" ...
	I1119 03:03:45.816672 1670171 cli_runner.go:164] Run: docker start newest-cni-886248
	I1119 03:03:46.111155 1670171 cli_runner.go:164] Run: docker container inspect newest-cni-886248 --format={{.State.Status}}
	I1119 03:03:46.133780 1670171 kic.go:430] container "newest-cni-886248" state is running.
	I1119 03:03:46.134294 1670171 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-886248
	I1119 03:03:46.165832 1670171 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/newest-cni-886248/config.json ...
	I1119 03:03:46.166055 1670171 machine.go:94] provisionDockerMachine start ...
	I1119 03:03:46.166113 1670171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-886248
	I1119 03:03:46.193071 1670171 main.go:143] libmachine: Using SSH client type: native
	I1119 03:03:46.193388 1670171 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34935 <nil> <nil>}
	I1119 03:03:46.193398 1670171 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 03:03:46.194175 1670171 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1119 03:03:49.341005 1670171 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-886248
	
	I1119 03:03:49.341030 1670171 ubuntu.go:182] provisioning hostname "newest-cni-886248"
	I1119 03:03:49.341151 1670171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-886248
	I1119 03:03:49.359764 1670171 main.go:143] libmachine: Using SSH client type: native
	I1119 03:03:49.360071 1670171 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34935 <nil> <nil>}
	I1119 03:03:49.360088 1670171 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-886248 && echo "newest-cni-886248" | sudo tee /etc/hostname
	I1119 03:03:49.511179 1670171 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-886248
	
	I1119 03:03:49.511253 1670171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-886248
	I1119 03:03:49.530369 1670171 main.go:143] libmachine: Using SSH client type: native
	I1119 03:03:49.530682 1670171 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34935 <nil> <nil>}
	I1119 03:03:49.530705 1670171 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-886248' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-886248/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-886248' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 03:03:49.669701 1670171 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 03:03:49.669771 1670171 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21924-1463525/.minikube CaCertPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21924-1463525/.minikube}
	I1119 03:03:49.669819 1670171 ubuntu.go:190] setting up certificates
	I1119 03:03:49.669858 1670171 provision.go:84] configureAuth start
	I1119 03:03:49.670007 1670171 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-886248
	I1119 03:03:49.687518 1670171 provision.go:143] copyHostCerts
	I1119 03:03:49.687594 1670171 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.pem, removing ...
	I1119 03:03:49.687611 1670171 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.pem
	I1119 03:03:49.687686 1670171 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.pem (1078 bytes)
	I1119 03:03:49.687785 1670171 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-1463525/.minikube/cert.pem, removing ...
	I1119 03:03:49.687790 1670171 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-1463525/.minikube/cert.pem
	I1119 03:03:49.687815 1670171 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21924-1463525/.minikube/cert.pem (1123 bytes)
	I1119 03:03:49.687869 1670171 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-1463525/.minikube/key.pem, removing ...
	I1119 03:03:49.687873 1670171 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-1463525/.minikube/key.pem
	I1119 03:03:49.687907 1670171 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21924-1463525/.minikube/key.pem (1675 bytes)
	I1119 03:03:49.688000 1670171 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca-key.pem org=jenkins.newest-cni-886248 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-886248]
	I1119 03:03:50.073650 1670171 provision.go:177] copyRemoteCerts
	I1119 03:03:50.073720 1670171 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 03:03:50.073771 1670171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-886248
	I1119 03:03:50.092763 1670171 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34935 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/newest-cni-886248/id_rsa Username:docker}
	I1119 03:03:50.198820 1670171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1119 03:03:50.218965 1670171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1119 03:03:50.240614 1670171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1119 03:03:50.257971 1670171 provision.go:87] duration metric: took 588.071039ms to configureAuth
	I1119 03:03:50.257999 1670171 ubuntu.go:206] setting minikube options for container-runtime
	I1119 03:03:50.258207 1670171 config.go:182] Loaded profile config "newest-cni-886248": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 03:03:50.258311 1670171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-886248
	I1119 03:03:50.279488 1670171 main.go:143] libmachine: Using SSH client type: native
	I1119 03:03:50.279799 1670171 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34935 <nil> <nil>}
	I1119 03:03:50.279814 1670171 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 03:03:50.612777 1670171 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 03:03:50.612824 1670171 machine.go:97] duration metric: took 4.446759249s to provisionDockerMachine
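	The CRIO_MINIKUBE_OPTIONS drop-in written above hands CRI-O "--insecure-registry 10.96.0.0/12", i.e. the whole service CIDR, presumably so that in-cluster registries exposed on a ClusterIP (such as the registry addon) can be pulled from without TLS. A quick way to confirm what landed on the node, assuming the kicbase crio.service sources /etc/sysconfig/crio.minikube as an environment file:
	  # show the file minikube just wrote inside the container
	  docker exec newest-cni-886248 cat /etc/sysconfig/crio.minikube
	  # confirm the crio unit actually picks it up
	  docker exec newest-cni-886248 systemctl cat crio | grep -i -A1 'EnvironmentFile\|CRIO_MINIKUBE'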
	I1119 03:03:50.612836 1670171 start.go:293] postStartSetup for "newest-cni-886248" (driver="docker")
	I1119 03:03:50.612847 1670171 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 03:03:50.612915 1670171 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 03:03:50.612971 1670171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-886248
	I1119 03:03:50.630696 1670171 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34935 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/newest-cni-886248/id_rsa Username:docker}
	I1119 03:03:50.729202 1670171 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 03:03:50.732546 1670171 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 03:03:50.732574 1670171 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 03:03:50.732585 1670171 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-1463525/.minikube/addons for local assets ...
	I1119 03:03:50.732638 1670171 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-1463525/.minikube/files for local assets ...
	I1119 03:03:50.732719 1670171 filesync.go:149] local asset: /home/jenkins/minikube-integration/21924-1463525/.minikube/files/etc/ssl/certs/14653772.pem -> 14653772.pem in /etc/ssl/certs
	I1119 03:03:50.732818 1670171 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 03:03:50.740334 1670171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/files/etc/ssl/certs/14653772.pem --> /etc/ssl/certs/14653772.pem (1708 bytes)
	I1119 03:03:50.758017 1670171 start.go:296] duration metric: took 145.165054ms for postStartSetup
	I1119 03:03:50.758134 1670171 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 03:03:50.758204 1670171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-886248
	I1119 03:03:50.775332 1670171 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34935 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/newest-cni-886248/id_rsa Username:docker}
	I1119 03:03:50.874646 1670171 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 03:03:50.879898 1670171 fix.go:56] duration metric: took 5.083439603s for fixHost
	I1119 03:03:50.879932 1670171 start.go:83] releasing machines lock for "newest-cni-886248", held for 5.083490334s
	I1119 03:03:50.880002 1670171 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-886248
	I1119 03:03:50.898647 1670171 ssh_runner.go:195] Run: cat /version.json
	I1119 03:03:50.898693 1670171 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 03:03:50.898701 1670171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-886248
	I1119 03:03:50.898767 1670171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-886248
	I1119 03:03:50.918189 1670171 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34935 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/newest-cni-886248/id_rsa Username:docker}
	I1119 03:03:50.931779 1670171 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34935 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/newest-cni-886248/id_rsa Username:docker}
	I1119 03:03:51.021667 1670171 ssh_runner.go:195] Run: systemctl --version
	I1119 03:03:51.117597 1670171 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 03:03:51.158694 1670171 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 03:03:51.163262 1670171 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 03:03:51.163354 1670171 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 03:03:51.171346 1670171 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1119 03:03:51.171425 1670171 start.go:496] detecting cgroup driver to use...
	I1119 03:03:51.171471 1670171 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1119 03:03:51.171553 1670171 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 03:03:51.187671 1670171 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 03:03:51.200786 1670171 docker.go:218] disabling cri-docker service (if available) ...
	I1119 03:03:51.200896 1670171 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 03:03:51.216379 1670171 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 03:03:51.229862 1670171 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 03:03:51.347667 1670171 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 03:03:51.471731 1670171 docker.go:234] disabling docker service ...
	I1119 03:03:51.471852 1670171 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 03:03:51.488567 1670171 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 03:03:51.501380 1670171 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 03:03:51.616293 1670171 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 03:03:51.750642 1670171 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 03:03:51.764770 1670171 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 03:03:51.780195 1670171 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 03:03:51.780293 1670171 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 03:03:51.789091 1670171 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1119 03:03:51.789187 1670171 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 03:03:51.800544 1670171 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 03:03:51.809156 1670171 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 03:03:51.817934 1670171 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 03:03:51.826258 1670171 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 03:03:51.835386 1670171 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 03:03:51.843742 1670171 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 03:03:51.857715 1670171 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 03:03:51.865787 1670171 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 03:03:51.872921 1670171 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 03:03:51.991413 1670171 ssh_runner.go:195] Run: sudo systemctl restart crio
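	Taken together, the sed/grep edits above pin the pause image, switch the cgroup manager to cgroupfs, keep conmon in the pod cgroup, and open unprivileged low ports. A sketch of the result, assuming the stock kicbase drop-in keeps these fields under the usual [crio.image] and [crio.runtime] tables:
	  # inspect the edited drop-in; expected (abridged) content:
	  docker exec newest-cni-886248 cat /etc/crio/crio.conf.d/02-crio.conf
	  #   [crio.image]
	  #   pause_image = "registry.k8s.io/pause:3.10.1"
	  #   [crio.runtime]
	  #   cgroup_manager = "cgroupfs"
	  #   conmon_cgroup = "pod"
	  #   default_sysctls = [
	  #     "net.ipv4.ip_unprivileged_port_start=0",
	  #   ]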
	I1119 03:03:52.174666 1670171 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 03:03:52.174809 1670171 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 03:03:52.178820 1670171 start.go:564] Will wait 60s for crictl version
	I1119 03:03:52.178905 1670171 ssh_runner.go:195] Run: which crictl
	I1119 03:03:52.182623 1670171 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 03:03:52.212784 1670171 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1119 03:03:52.212891 1670171 ssh_runner.go:195] Run: crio --version
	I1119 03:03:52.240641 1670171 ssh_runner.go:195] Run: crio --version
	I1119 03:03:52.274720 1670171 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1119 03:03:52.277698 1670171 cli_runner.go:164] Run: docker network inspect newest-cni-886248 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 03:03:52.294004 1670171 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1119 03:03:52.297976 1670171 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 03:03:52.310508 1670171 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1119 03:03:52.313227 1670171 kubeadm.go:884] updating cluster {Name:newest-cni-886248 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-886248 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:
262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 03:03:52.313380 1670171 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 03:03:52.313477 1670171 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 03:03:52.352180 1670171 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 03:03:52.352206 1670171 crio.go:433] Images already preloaded, skipping extraction
	I1119 03:03:52.352263 1670171 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 03:03:52.377469 1670171 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 03:03:52.377494 1670171 cache_images.go:86] Images are preloaded, skipping loading
	I1119 03:03:52.377502 1670171 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1119 03:03:52.377645 1670171 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-886248 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-886248 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
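	The generated kubelet drop-in above lands at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 367-byte scp a few lines down). Note the paired flags: --cgroups-per-qos=false and --enforce-node-allocatable= go together, because the kubelet rejects node-allocatable enforcement unless per-QoS cgroups are enabled. To see what was actually installed on the node:
	  # kubelet unit plus all drop-ins, as systemd resolves them
	  docker exec newest-cni-886248 systemctl cat kubelet
	  docker exec newest-cni-886248 cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf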
	I1119 03:03:52.377732 1670171 ssh_runner.go:195] Run: crio config
	I1119 03:03:52.439427 1670171 cni.go:84] Creating CNI manager for ""
	I1119 03:03:52.439453 1670171 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 03:03:52.439477 1670171 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1119 03:03:52.439506 1670171 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-886248 NodeName:newest-cni-886248 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 03:03:52.439642 1670171 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-886248"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
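	The four documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are what gets copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below. If a start fails on config parsing, a by-hand sanity check on the node is possible, assuming the bundled kubeadm is recent enough to ship the validate subcommand (added around v1.26):
	  # dump and validate the generated kubeadm config inside the container
	  docker exec newest-cni-886248 sudo cat /var/tmp/minikube/kubeadm.yaml.new
	  docker exec newest-cni-886248 sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new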
	
	I1119 03:03:52.439723 1670171 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 03:03:52.447553 1670171 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 03:03:52.447643 1670171 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 03:03:52.454917 1670171 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1119 03:03:52.467306 1670171 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 03:03:52.480129 1670171 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1119 03:03:52.493028 1670171 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1119 03:03:52.496761 1670171 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
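	This mirrors the host.minikube.internal rewrite at 03:03:52.297 above: each one-liner strips any stale entry before re-appending it, so repeated restarts do not accumulate duplicates. After both run, the container's /etc/hosts should carry:
	  # expected minikube-managed entries
	  docker exec newest-cni-886248 grep minikube.internal /etc/hosts
	  #   192.168.76.1	host.minikube.internal
	  #   192.168.76.2	control-plane.minikube.internal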
	I1119 03:03:52.506994 1670171 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 03:03:52.620332 1670171 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 03:03:52.636936 1670171 certs.go:69] Setting up /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/newest-cni-886248 for IP: 192.168.76.2
	I1119 03:03:52.636959 1670171 certs.go:195] generating shared ca certs ...
	I1119 03:03:52.636981 1670171 certs.go:227] acquiring lock for ca certs: {Name:mk25124d30540ffea11da5f0aa38fb6f55186602 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 03:03:52.637113 1670171 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.key
	I1119 03:03:52.637157 1670171 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/proxy-client-ca.key
	I1119 03:03:52.637169 1670171 certs.go:257] generating profile certs ...
	I1119 03:03:52.637256 1670171 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/newest-cni-886248/client.key
	I1119 03:03:52.637329 1670171 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/newest-cni-886248/apiserver.key.774757e0
	I1119 03:03:52.637375 1670171 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/newest-cni-886248/proxy-client.key
	I1119 03:03:52.637497 1670171 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/1465377.pem (1338 bytes)
	W1119 03:03:52.637676 1670171 certs.go:480] ignoring /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/1465377_empty.pem, impossibly tiny 0 bytes
	I1119 03:03:52.637693 1670171 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca-key.pem (1679 bytes)
	I1119 03:03:52.637721 1670171 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem (1078 bytes)
	I1119 03:03:52.637744 1670171 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/cert.pem (1123 bytes)
	I1119 03:03:52.637782 1670171 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/key.pem (1675 bytes)
	I1119 03:03:52.637834 1670171 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/files/etc/ssl/certs/14653772.pem (1708 bytes)
	I1119 03:03:52.638422 1670171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 03:03:52.660084 1670171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1119 03:03:52.678457 1670171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 03:03:52.695538 1670171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1119 03:03:52.712623 1670171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/newest-cni-886248/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1119 03:03:52.752891 1670171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/newest-cni-886248/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1119 03:03:52.781289 1670171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/newest-cni-886248/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 03:03:52.800723 1670171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/newest-cni-886248/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1119 03:03:52.818904 1670171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/1465377.pem --> /usr/share/ca-certificates/1465377.pem (1338 bytes)
	I1119 03:03:52.838576 1670171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/files/etc/ssl/certs/14653772.pem --> /usr/share/ca-certificates/14653772.pem (1708 bytes)
	I1119 03:03:52.859843 1670171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 03:03:52.882715 1670171 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 03:03:52.896517 1670171 ssh_runner.go:195] Run: openssl version
	I1119 03:03:52.903148 1670171 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14653772.pem && ln -fs /usr/share/ca-certificates/14653772.pem /etc/ssl/certs/14653772.pem"
	I1119 03:03:52.911737 1670171 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14653772.pem
	I1119 03:03:52.915641 1670171 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 02:04 /usr/share/ca-certificates/14653772.pem
	I1119 03:03:52.915745 1670171 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14653772.pem
	I1119 03:03:52.962206 1670171 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14653772.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 03:03:52.970041 1670171 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 03:03:52.978260 1670171 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 03:03:52.982117 1670171 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 01:58 /usr/share/ca-certificates/minikubeCA.pem
	I1119 03:03:52.982258 1670171 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 03:03:53.023645 1670171 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 03:03:53.032824 1670171 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1465377.pem && ln -fs /usr/share/ca-certificates/1465377.pem /etc/ssl/certs/1465377.pem"
	I1119 03:03:53.041380 1670171 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1465377.pem
	I1119 03:03:53.045061 1670171 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 02:04 /usr/share/ca-certificates/1465377.pem
	I1119 03:03:53.045134 1670171 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1465377.pem
	I1119 03:03:53.087163 1670171 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1465377.pem /etc/ssl/certs/51391683.0"
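	The ls/openssl/ln triplets above are a by-hand c_rehash: each CA dropped into /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its OpenSSL subject hash, which is how the OpenSSL lookup directory finds it. The same step for one cert, using the minikubeCA PEM shown above:
	  # compute the subject hash and create the lookup symlink (b5213941.0 in this run)
	  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"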
	I1119 03:03:53.095054 1670171 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 03:03:53.098639 1670171 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1119 03:03:53.140386 1670171 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1119 03:03:53.190466 1670171 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1119 03:03:53.233941 1670171 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1119 03:03:53.274859 1670171 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1119 03:03:53.318412 1670171 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
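	Each openssl x509 -checkend 86400 call above exits 0 only if the certificate is still valid 24 hours (86400 seconds) from now, which is presumably how minikube decides the existing control-plane certs can be reused as-is. A standalone check for one of them:
	  # exit 0 = valid for at least another day, non-zero = expiring soon or unreadable
	  sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	    && echo "cert ok for 24h" || echo "cert expires within 24h"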
	I1119 03:03:53.389564 1670171 kubeadm.go:401] StartCluster: {Name:newest-cni-886248 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-886248 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262
144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 03:03:53.389709 1670171 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 03:03:53.389803 1670171 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 03:03:53.518324 1670171 cri.go:89] found id: "e1eed48672587b0f0942e9efafdc58e46f8385b96a631acf88ebc24aca51da13"
	I1119 03:03:53.518395 1670171 cri.go:89] found id: "6d128baea56ffd78984f08dc3dc92a053e8b13d6136d8a220e0fb895c448d4be"
	I1119 03:03:53.518413 1670171 cri.go:89] found id: "ffb0198ce7f012092c5e61eeb22ee641ada1a435b1cd87da1b7ad5f0d00519fc"
	I1119 03:03:53.518429 1670171 cri.go:89] found id: "da8265e8e46cd2db7db56b1bcfe9737eace63b799347a005a9b97166455a3aff"
	I1119 03:03:53.518446 1670171 cri.go:89] found id: ""
	I1119 03:03:53.518526 1670171 ssh_runner.go:195] Run: sudo runc list -f json
	W1119 03:03:53.565185 1670171 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T03:03:53Z" level=error msg="open /run/runc: no such file or directory"
	I1119 03:03:53.565278 1670171 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 03:03:53.595288 1670171 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1119 03:03:53.595309 1670171 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1119 03:03:53.595359 1670171 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1119 03:03:53.618818 1670171 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1119 03:03:53.619483 1670171 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-886248" does not appear in /home/jenkins/minikube-integration/21924-1463525/kubeconfig
	I1119 03:03:53.619808 1670171 kubeconfig.go:62] /home/jenkins/minikube-integration/21924-1463525/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-886248" cluster setting kubeconfig missing "newest-cni-886248" context setting]
	I1119 03:03:53.620351 1670171 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/kubeconfig: {Name:mk569dbb5fa76b01404eb61ebb04e9884665ea78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 03:03:53.622714 1670171 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1119 03:03:53.641984 1670171 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1119 03:03:53.642062 1670171 kubeadm.go:602] duration metric: took 46.746271ms to restartPrimaryControlPlane
	I1119 03:03:53.642086 1670171 kubeadm.go:403] duration metric: took 252.531076ms to StartCluster
	I1119 03:03:53.642129 1670171 settings.go:142] acquiring lock: {Name:mk60840cd190596747bdc8f4ec6ab9c30048bf11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 03:03:53.642225 1670171 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21924-1463525/kubeconfig
	I1119 03:03:53.649998 1670171 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/kubeconfig: {Name:mk569dbb5fa76b01404eb61ebb04e9884665ea78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 03:03:53.650250 1670171 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 03:03:53.651635 1670171 config.go:182] Loaded profile config "newest-cni-886248": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 03:03:53.651683 1670171 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 03:03:53.651746 1670171 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-886248"
	I1119 03:03:53.651761 1670171 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-886248"
	W1119 03:03:53.651767 1670171 addons.go:248] addon storage-provisioner should already be in state true
	I1119 03:03:53.651788 1670171 host.go:66] Checking if "newest-cni-886248" exists ...
	I1119 03:03:53.652206 1670171 cli_runner.go:164] Run: docker container inspect newest-cni-886248 --format={{.State.Status}}
	I1119 03:03:53.652503 1670171 addons.go:70] Setting dashboard=true in profile "newest-cni-886248"
	I1119 03:03:53.652539 1670171 addons.go:239] Setting addon dashboard=true in "newest-cni-886248"
	W1119 03:03:53.652724 1670171 addons.go:248] addon dashboard should already be in state true
	I1119 03:03:53.652762 1670171 host.go:66] Checking if "newest-cni-886248" exists ...
	I1119 03:03:53.652676 1670171 addons.go:70] Setting default-storageclass=true in profile "newest-cni-886248"
	I1119 03:03:53.652946 1670171 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-886248"
	I1119 03:03:53.653236 1670171 cli_runner.go:164] Run: docker container inspect newest-cni-886248 --format={{.State.Status}}
	I1119 03:03:53.655486 1670171 cli_runner.go:164] Run: docker container inspect newest-cni-886248 --format={{.State.Status}}
	I1119 03:03:53.657151 1670171 out.go:179] * Verifying Kubernetes components...
	I1119 03:03:53.660375 1670171 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 03:03:53.708744 1670171 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 03:03:53.708811 1670171 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1119 03:03:53.712928 1670171 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 03:03:53.712951 1670171 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 03:03:53.713018 1670171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-886248
	I1119 03:03:53.716417 1670171 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1119 03:03:52.033911 1662687 node_ready.go:57] node "no-preload-800908" has "Ready":"False" status (will retry)
	I1119 03:03:53.536114 1662687 node_ready.go:49] node "no-preload-800908" is "Ready"
	I1119 03:03:53.536140 1662687 node_ready.go:38] duration metric: took 14.505081158s for node "no-preload-800908" to be "Ready" ...
	I1119 03:03:53.536155 1662687 api_server.go:52] waiting for apiserver process to appear ...
	I1119 03:03:53.536210 1662687 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 03:03:53.568713 1662687 api_server.go:72] duration metric: took 16.982212001s to wait for apiserver process to appear ...
	I1119 03:03:53.568735 1662687 api_server.go:88] waiting for apiserver healthz status ...
	I1119 03:03:53.568758 1662687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1119 03:03:53.578652 1662687 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1119 03:03:53.580029 1662687 api_server.go:141] control plane version: v1.34.1
	I1119 03:03:53.580053 1662687 api_server.go:131] duration metric: took 11.310896ms to wait for apiserver health ...
	I1119 03:03:53.580062 1662687 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 03:03:53.584040 1662687 system_pods.go:59] 8 kube-system pods found
	I1119 03:03:53.584071 1662687 system_pods.go:61] "coredns-66bc5c9577-5gb8d" [f2cf06c3-a27f-4205-bf83-035adba73690] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 03:03:53.584078 1662687 system_pods.go:61] "etcd-no-preload-800908" [4b2e2353-9488-40c1-a11f-79c5089e6fe1] Running
	I1119 03:03:53.584085 1662687 system_pods.go:61] "kindnet-hcdj9" [dc9e982d-8e14-47c6-a9a3-a4502602caa4] Running
	I1119 03:03:53.584089 1662687 system_pods.go:61] "kube-apiserver-no-preload-800908" [3378061b-4194-4784-b307-f948fa017d4a] Running
	I1119 03:03:53.584094 1662687 system_pods.go:61] "kube-controller-manager-no-preload-800908" [cb7bca27-b010-4e89-adb5-9303f09112c5] Running
	I1119 03:03:53.584099 1662687 system_pods.go:61] "kube-proxy-59bnq" [6b6ee3ab-c31d-447c-895b-d341732cb482] Running
	I1119 03:03:53.584103 1662687 system_pods.go:61] "kube-scheduler-no-preload-800908" [214dd1d7-19ed-477b-8170-e9ddfdc6a14b] Running
	I1119 03:03:53.584109 1662687 system_pods.go:61] "storage-provisioner" [41c9b9d6-c070-4f5d-92ec-e0f2baf1609d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 03:03:53.584116 1662687 system_pods.go:74] duration metric: took 4.0479ms to wait for pod list to return data ...
	I1119 03:03:53.584125 1662687 default_sa.go:34] waiting for default service account to be created ...
	I1119 03:03:53.592683 1662687 default_sa.go:45] found service account: "default"
	I1119 03:03:53.592707 1662687 default_sa.go:55] duration metric: took 8.57618ms for default service account to be created ...
	I1119 03:03:53.592716 1662687 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 03:03:53.607009 1662687 system_pods.go:86] 8 kube-system pods found
	I1119 03:03:53.607092 1662687 system_pods.go:89] "coredns-66bc5c9577-5gb8d" [f2cf06c3-a27f-4205-bf83-035adba73690] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 03:03:53.607113 1662687 system_pods.go:89] "etcd-no-preload-800908" [4b2e2353-9488-40c1-a11f-79c5089e6fe1] Running
	I1119 03:03:53.607154 1662687 system_pods.go:89] "kindnet-hcdj9" [dc9e982d-8e14-47c6-a9a3-a4502602caa4] Running
	I1119 03:03:53.607192 1662687 system_pods.go:89] "kube-apiserver-no-preload-800908" [3378061b-4194-4784-b307-f948fa017d4a] Running
	I1119 03:03:53.607211 1662687 system_pods.go:89] "kube-controller-manager-no-preload-800908" [cb7bca27-b010-4e89-adb5-9303f09112c5] Running
	I1119 03:03:53.607243 1662687 system_pods.go:89] "kube-proxy-59bnq" [6b6ee3ab-c31d-447c-895b-d341732cb482] Running
	I1119 03:03:53.607264 1662687 system_pods.go:89] "kube-scheduler-no-preload-800908" [214dd1d7-19ed-477b-8170-e9ddfdc6a14b] Running
	I1119 03:03:53.607284 1662687 system_pods.go:89] "storage-provisioner" [41c9b9d6-c070-4f5d-92ec-e0f2baf1609d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 03:03:53.607335 1662687 retry.go:31] will retry after 244.100565ms: missing components: kube-dns
	I1119 03:03:53.868052 1662687 system_pods.go:86] 8 kube-system pods found
	I1119 03:03:53.868084 1662687 system_pods.go:89] "coredns-66bc5c9577-5gb8d" [f2cf06c3-a27f-4205-bf83-035adba73690] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 03:03:53.868091 1662687 system_pods.go:89] "etcd-no-preload-800908" [4b2e2353-9488-40c1-a11f-79c5089e6fe1] Running
	I1119 03:03:53.868097 1662687 system_pods.go:89] "kindnet-hcdj9" [dc9e982d-8e14-47c6-a9a3-a4502602caa4] Running
	I1119 03:03:53.868102 1662687 system_pods.go:89] "kube-apiserver-no-preload-800908" [3378061b-4194-4784-b307-f948fa017d4a] Running
	I1119 03:03:53.868106 1662687 system_pods.go:89] "kube-controller-manager-no-preload-800908" [cb7bca27-b010-4e89-adb5-9303f09112c5] Running
	I1119 03:03:53.868110 1662687 system_pods.go:89] "kube-proxy-59bnq" [6b6ee3ab-c31d-447c-895b-d341732cb482] Running
	I1119 03:03:53.868114 1662687 system_pods.go:89] "kube-scheduler-no-preload-800908" [214dd1d7-19ed-477b-8170-e9ddfdc6a14b] Running
	I1119 03:03:53.868119 1662687 system_pods.go:89] "storage-provisioner" [41c9b9d6-c070-4f5d-92ec-e0f2baf1609d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 03:03:53.868136 1662687 retry.go:31] will retry after 284.240962ms: missing components: kube-dns
	I1119 03:03:54.156303 1662687 system_pods.go:86] 8 kube-system pods found
	I1119 03:03:54.156335 1662687 system_pods.go:89] "coredns-66bc5c9577-5gb8d" [f2cf06c3-a27f-4205-bf83-035adba73690] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 03:03:54.156341 1662687 system_pods.go:89] "etcd-no-preload-800908" [4b2e2353-9488-40c1-a11f-79c5089e6fe1] Running
	I1119 03:03:54.156349 1662687 system_pods.go:89] "kindnet-hcdj9" [dc9e982d-8e14-47c6-a9a3-a4502602caa4] Running
	I1119 03:03:54.156353 1662687 system_pods.go:89] "kube-apiserver-no-preload-800908" [3378061b-4194-4784-b307-f948fa017d4a] Running
	I1119 03:03:54.156358 1662687 system_pods.go:89] "kube-controller-manager-no-preload-800908" [cb7bca27-b010-4e89-adb5-9303f09112c5] Running
	I1119 03:03:54.156363 1662687 system_pods.go:89] "kube-proxy-59bnq" [6b6ee3ab-c31d-447c-895b-d341732cb482] Running
	I1119 03:03:54.156367 1662687 system_pods.go:89] "kube-scheduler-no-preload-800908" [214dd1d7-19ed-477b-8170-e9ddfdc6a14b] Running
	I1119 03:03:54.156373 1662687 system_pods.go:89] "storage-provisioner" [41c9b9d6-c070-4f5d-92ec-e0f2baf1609d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 03:03:54.156387 1662687 retry.go:31] will retry after 477.419711ms: missing components: kube-dns
	I1119 03:03:54.637298 1662687 system_pods.go:86] 8 kube-system pods found
	I1119 03:03:54.637327 1662687 system_pods.go:89] "coredns-66bc5c9577-5gb8d" [f2cf06c3-a27f-4205-bf83-035adba73690] Running
	I1119 03:03:54.637333 1662687 system_pods.go:89] "etcd-no-preload-800908" [4b2e2353-9488-40c1-a11f-79c5089e6fe1] Running
	I1119 03:03:54.637337 1662687 system_pods.go:89] "kindnet-hcdj9" [dc9e982d-8e14-47c6-a9a3-a4502602caa4] Running
	I1119 03:03:54.637341 1662687 system_pods.go:89] "kube-apiserver-no-preload-800908" [3378061b-4194-4784-b307-f948fa017d4a] Running
	I1119 03:03:54.637346 1662687 system_pods.go:89] "kube-controller-manager-no-preload-800908" [cb7bca27-b010-4e89-adb5-9303f09112c5] Running
	I1119 03:03:54.637350 1662687 system_pods.go:89] "kube-proxy-59bnq" [6b6ee3ab-c31d-447c-895b-d341732cb482] Running
	I1119 03:03:54.637354 1662687 system_pods.go:89] "kube-scheduler-no-preload-800908" [214dd1d7-19ed-477b-8170-e9ddfdc6a14b] Running
	I1119 03:03:54.637357 1662687 system_pods.go:89] "storage-provisioner" [41c9b9d6-c070-4f5d-92ec-e0f2baf1609d] Running
	I1119 03:03:54.637365 1662687 system_pods.go:126] duration metric: took 1.044642345s to wait for k8s-apps to be running ...
	I1119 03:03:54.637372 1662687 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 03:03:54.637426 1662687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 03:03:54.656787 1662687 system_svc.go:56] duration metric: took 19.404605ms WaitForService to wait for kubelet
	I1119 03:03:54.656811 1662687 kubeadm.go:587] duration metric: took 18.070315566s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 03:03:54.656830 1662687 node_conditions.go:102] verifying NodePressure condition ...
	I1119 03:03:54.665899 1662687 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1119 03:03:54.665938 1662687 node_conditions.go:123] node cpu capacity is 2
	I1119 03:03:54.665951 1662687 node_conditions.go:105] duration metric: took 9.115833ms to run NodePressure ...
	I1119 03:03:54.665963 1662687 start.go:242] waiting for startup goroutines ...
	I1119 03:03:54.665970 1662687 start.go:247] waiting for cluster config update ...
	I1119 03:03:54.665981 1662687 start.go:256] writing updated cluster config ...
	I1119 03:03:54.666261 1662687 ssh_runner.go:195] Run: rm -f paused
	I1119 03:03:54.673938 1662687 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 03:03:54.677320 1662687 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-5gb8d" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:03:54.683543 1662687 pod_ready.go:94] pod "coredns-66bc5c9577-5gb8d" is "Ready"
	I1119 03:03:54.683608 1662687 pod_ready.go:86] duration metric: took 6.268752ms for pod "coredns-66bc5c9577-5gb8d" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:03:54.685784 1662687 pod_ready.go:83] waiting for pod "etcd-no-preload-800908" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:03:54.689860 1662687 pod_ready.go:94] pod "etcd-no-preload-800908" is "Ready"
	I1119 03:03:54.689879 1662687 pod_ready.go:86] duration metric: took 4.031581ms for pod "etcd-no-preload-800908" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:03:54.692734 1662687 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-800908" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:03:54.697827 1662687 pod_ready.go:94] pod "kube-apiserver-no-preload-800908" is "Ready"
	I1119 03:03:54.697846 1662687 pod_ready.go:86] duration metric: took 5.096781ms for pod "kube-apiserver-no-preload-800908" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:03:54.702276 1662687 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-800908" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:03:55.077996 1662687 pod_ready.go:94] pod "kube-controller-manager-no-preload-800908" is "Ready"
	I1119 03:03:55.078071 1662687 pod_ready.go:86] duration metric: took 375.775165ms for pod "kube-controller-manager-no-preload-800908" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:03:55.279496 1662687 pod_ready.go:83] waiting for pod "kube-proxy-59bnq" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:03:53.719259 1670171 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1119 03:03:53.719292 1670171 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1119 03:03:53.719354 1670171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-886248
	I1119 03:03:53.722356 1670171 addons.go:239] Setting addon default-storageclass=true in "newest-cni-886248"
	W1119 03:03:53.722383 1670171 addons.go:248] addon default-storageclass should already be in state true
	I1119 03:03:53.722410 1670171 host.go:66] Checking if "newest-cni-886248" exists ...
	I1119 03:03:53.722888 1670171 cli_runner.go:164] Run: docker container inspect newest-cni-886248 --format={{.State.Status}}
	I1119 03:03:53.746132 1670171 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34935 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/newest-cni-886248/id_rsa Username:docker}
	I1119 03:03:53.774883 1670171 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34935 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/newest-cni-886248/id_rsa Username:docker}
	I1119 03:03:53.781432 1670171 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 03:03:53.781453 1670171 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 03:03:53.781566 1670171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-886248
	I1119 03:03:53.808908 1670171 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34935 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/newest-cni-886248/id_rsa Username:docker}
	I1119 03:03:54.126795 1670171 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 03:03:54.142862 1670171 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 03:03:54.173563 1670171 api_server.go:52] waiting for apiserver process to appear ...
	I1119 03:03:54.173687 1670171 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 03:03:54.197448 1670171 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 03:03:54.220431 1670171 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1119 03:03:54.220509 1670171 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1119 03:03:54.310138 1670171 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1119 03:03:54.310212 1670171 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1119 03:03:54.410928 1670171 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1119 03:03:54.410999 1670171 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1119 03:03:54.466241 1670171 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1119 03:03:54.466309 1670171 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1119 03:03:54.502112 1670171 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1119 03:03:54.502185 1670171 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1119 03:03:54.516936 1670171 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1119 03:03:54.517016 1670171 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1119 03:03:54.532280 1670171 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1119 03:03:54.532353 1670171 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1119 03:03:54.550862 1670171 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1119 03:03:54.550923 1670171 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1119 03:03:54.566140 1670171 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1119 03:03:54.566211 1670171 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1119 03:03:54.580770 1670171 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1119 03:03:55.678128 1662687 pod_ready.go:94] pod "kube-proxy-59bnq" is "Ready"
	I1119 03:03:55.678202 1662687 pod_ready.go:86] duration metric: took 398.633383ms for pod "kube-proxy-59bnq" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:03:55.878078 1662687 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-800908" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:03:56.280084 1662687 pod_ready.go:94] pod "kube-scheduler-no-preload-800908" is "Ready"
	I1119 03:03:56.280115 1662687 pod_ready.go:86] duration metric: took 401.964995ms for pod "kube-scheduler-no-preload-800908" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:03:56.280129 1662687 pod_ready.go:40] duration metric: took 1.606162948s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 03:03:56.379901 1662687 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1119 03:03:56.383386 1662687 out.go:179] * Done! kubectl is now configured to use "no-preload-800908" cluster and "default" namespace by default
	I1119 03:04:00.091208 1670171 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.94826581s)
	I1119 03:04:00.091280 1670171 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (5.917554633s)
	I1119 03:04:00.091293 1670171 api_server.go:72] duration metric: took 6.441014887s to wait for apiserver process to appear ...
	I1119 03:04:00.091299 1670171 api_server.go:88] waiting for apiserver healthz status ...
	I1119 03:04:00.091317 1670171 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 03:04:00.091696 1670171 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.894146877s)
	I1119 03:04:00.092051 1670171 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.511211055s)
	I1119 03:04:00.108787 1670171 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-886248 addons enable metrics-server
	
	I1119 03:04:00.117105 1670171 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1119 03:04:00.132270 1670171 api_server.go:141] control plane version: v1.34.1
	I1119 03:04:00.132302 1670171 api_server.go:131] duration metric: took 40.994961ms to wait for apiserver health ...
	I1119 03:04:00.132313 1670171 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 03:04:00.178031 1670171 system_pods.go:59] 8 kube-system pods found
	I1119 03:04:00.178146 1670171 system_pods.go:61] "coredns-66bc5c9577-wh5wb" [92363de0-8e50-45e7-84f7-8d0e20fa6d64] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1119 03:04:00.178192 1670171 system_pods.go:61] "etcd-newest-cni-886248" [5dc760bc-b71b-4b72-b27d-abf96ba66665] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1119 03:04:00.178224 1670171 system_pods.go:61] "kindnet-wbjgj" [baa5b1cf-5f4f-4ca9-959c-af74d9f62f83] Running
	I1119 03:04:00.178251 1670171 system_pods.go:61] "kube-apiserver-newest-cni-886248" [f48c4478-6515-4447-a2d8-bc8683421e68] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1119 03:04:00.178289 1670171 system_pods.go:61] "kube-controller-manager-newest-cni-886248" [78d87a76-a5af-4b59-9688-1f684aa4eb86] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1119 03:04:00.178319 1670171 system_pods.go:61] "kube-proxy-kn684" [f1bf5af1-d3b3-4b9b-9d12-1f94f4b89a2f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1119 03:04:00.178364 1670171 system_pods.go:61] "kube-scheduler-newest-cni-886248" [9d4bee4f-21a5-4c71-9174-885f35f536ac] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1119 03:04:00.178394 1670171 system_pods.go:61] "storage-provisioner" [4b774a63-0385-4354-91d0-0f4824a9a758] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1119 03:04:00.178417 1670171 system_pods.go:74] duration metric: took 46.0965ms to wait for pod list to return data ...
	I1119 03:04:00.178462 1670171 default_sa.go:34] waiting for default service account to be created ...
	I1119 03:04:00.204225 1670171 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1119 03:04:00.207261 1670171 addons.go:515] duration metric: took 6.55555183s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1119 03:04:00.208313 1670171 default_sa.go:45] found service account: "default"
	I1119 03:04:00.208398 1670171 default_sa.go:55] duration metric: took 29.912743ms for default service account to be created ...
	I1119 03:04:00.208454 1670171 kubeadm.go:587] duration metric: took 6.558172341s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1119 03:04:00.208502 1670171 node_conditions.go:102] verifying NodePressure condition ...
	I1119 03:04:00.237215 1670171 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1119 03:04:00.237308 1670171 node_conditions.go:123] node cpu capacity is 2
	I1119 03:04:00.237340 1670171 node_conditions.go:105] duration metric: took 28.811186ms to run NodePressure ...
	I1119 03:04:00.237382 1670171 start.go:242] waiting for startup goroutines ...
	I1119 03:04:00.237407 1670171 start.go:247] waiting for cluster config update ...
	I1119 03:04:00.237434 1670171 start.go:256] writing updated cluster config ...
	I1119 03:04:00.237945 1670171 ssh_runner.go:195] Run: rm -f paused
	I1119 03:04:00.440062 1670171 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1119 03:04:00.443326 1670171 out.go:179] * Done! kubectl is now configured to use "newest-cni-886248" cluster and "default" namespace by default
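
Both clusters in this capture are now up: "no-preload-800908" finished at 03:03:56 and "newest-cni-886248" at 03:04:00 with the storage-provisioner, dashboard, and default-storageclass addons applied. A minimal sketch for spot-checking that state by hand, assuming the same profile names and the kubeconfig updated above (the kubernetes-dashboard namespace is taken from the apiserver log further down):

	minikube -p newest-cni-886248 addons list
	kubectl --context newest-cni-886248 -n kubernetes-dashboard get pods,svc
	kubectl --context no-preload-800908 -n kube-system get pods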
	
	
	==> CRI-O <==
	Nov 19 03:03:59 newest-cni-886248 crio[613]: time="2025-11-19T03:03:59.087271675Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 03:03:59 newest-cni-886248 crio[613]: time="2025-11-19T03:03:59.092978572Z" level=info msg="Running pod sandbox: kube-system/kindnet-wbjgj/POD" id=978306de-5286-4342-b431-e3c227ba835b name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 03:03:59 newest-cni-886248 crio[613]: time="2025-11-19T03:03:59.093043242Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 03:03:59 newest-cni-886248 crio[613]: time="2025-11-19T03:03:59.1195832Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=3892463a-f8c4-4b27-ae30-771eb94f65ab name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 03:03:59 newest-cni-886248 crio[613]: time="2025-11-19T03:03:59.120523513Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=978306de-5286-4342-b431-e3c227ba835b name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 03:03:59 newest-cni-886248 crio[613]: time="2025-11-19T03:03:59.138976588Z" level=info msg="Ran pod sandbox 5bd27b879481c772b1d1f849f72e4f37cbc6faf8b1680002728fc29f231b289b with infra container: kube-system/kube-proxy-kn684/POD" id=3892463a-f8c4-4b27-ae30-771eb94f65ab name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 03:03:59 newest-cni-886248 crio[613]: time="2025-11-19T03:03:59.146241291Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=8c25f2f1-f926-4320-a062-7101b2db5da3 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 03:03:59 newest-cni-886248 crio[613]: time="2025-11-19T03:03:59.147763134Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=0d5a712b-f562-409b-b899-ed3b707d0918 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 03:03:59 newest-cni-886248 crio[613]: time="2025-11-19T03:03:59.150312559Z" level=info msg="Creating container: kube-system/kube-proxy-kn684/kube-proxy" id=2009cf19-3bdf-49ef-a363-7a0930b8af64 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 03:03:59 newest-cni-886248 crio[613]: time="2025-11-19T03:03:59.152252926Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 03:03:59 newest-cni-886248 crio[613]: time="2025-11-19T03:03:59.152286123Z" level=info msg="Ran pod sandbox 825c53ee89e4c089491320ebe97678466d051b9d8c323193e2646be2ac95a30e with infra container: kube-system/kindnet-wbjgj/POD" id=978306de-5286-4342-b431-e3c227ba835b name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 03:03:59 newest-cni-886248 crio[613]: time="2025-11-19T03:03:59.154741478Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=a0ec2727-2ce9-4876-ba78-b983dce1d416 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 03:03:59 newest-cni-886248 crio[613]: time="2025-11-19T03:03:59.157982394Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=c0da5eb2-5637-423b-b894-8c56c2927df0 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 03:03:59 newest-cni-886248 crio[613]: time="2025-11-19T03:03:59.159124409Z" level=info msg="Creating container: kube-system/kindnet-wbjgj/kindnet-cni" id=7ed85e74-d746-497b-9329-a95d63033b6d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 03:03:59 newest-cni-886248 crio[613]: time="2025-11-19T03:03:59.159420515Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 03:03:59 newest-cni-886248 crio[613]: time="2025-11-19T03:03:59.172461791Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 03:03:59 newest-cni-886248 crio[613]: time="2025-11-19T03:03:59.172950129Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 03:03:59 newest-cni-886248 crio[613]: time="2025-11-19T03:03:59.177780268Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 03:03:59 newest-cni-886248 crio[613]: time="2025-11-19T03:03:59.178469963Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 03:03:59 newest-cni-886248 crio[613]: time="2025-11-19T03:03:59.266322772Z" level=info msg="Created container 972e1acc7cbb218ba9da3cd0acc60b2dc76dcbd471980cb3d49154c780725250: kube-system/kube-proxy-kn684/kube-proxy" id=2009cf19-3bdf-49ef-a363-7a0930b8af64 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 03:03:59 newest-cni-886248 crio[613]: time="2025-11-19T03:03:59.277998925Z" level=info msg="Starting container: 972e1acc7cbb218ba9da3cd0acc60b2dc76dcbd471980cb3d49154c780725250" id=da39b955-983c-4256-9812-f6a4c3598068 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 03:03:59 newest-cni-886248 crio[613]: time="2025-11-19T03:03:59.285824654Z" level=info msg="Started container" PID=1070 containerID=972e1acc7cbb218ba9da3cd0acc60b2dc76dcbd471980cb3d49154c780725250 description=kube-system/kube-proxy-kn684/kube-proxy id=da39b955-983c-4256-9812-f6a4c3598068 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5bd27b879481c772b1d1f849f72e4f37cbc6faf8b1680002728fc29f231b289b
	Nov 19 03:03:59 newest-cni-886248 crio[613]: time="2025-11-19T03:03:59.302978353Z" level=info msg="Created container 6fe616ee0273c716f6e7a6fb7b7ac5a8ff750f30a0b8d2ed8d266d7ad6a45adc: kube-system/kindnet-wbjgj/kindnet-cni" id=7ed85e74-d746-497b-9329-a95d63033b6d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 03:03:59 newest-cni-886248 crio[613]: time="2025-11-19T03:03:59.306194048Z" level=info msg="Starting container: 6fe616ee0273c716f6e7a6fb7b7ac5a8ff750f30a0b8d2ed8d266d7ad6a45adc" id=9e69775d-c28b-4550-95ed-a725d6efe016 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 03:03:59 newest-cni-886248 crio[613]: time="2025-11-19T03:03:59.314750569Z" level=info msg="Started container" PID=1067 containerID=6fe616ee0273c716f6e7a6fb7b7ac5a8ff750f30a0b8d2ed8d266d7ad6a45adc description=kube-system/kindnet-wbjgj/kindnet-cni id=9e69775d-c28b-4550-95ed-a725d6efe016 name=/runtime.v1.RuntimeService/StartContainer sandboxID=825c53ee89e4c089491320ebe97678466d051b9d8c323193e2646be2ac95a30e
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	972e1acc7cbb2       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   4 seconds ago       Running             kube-proxy                1                   5bd27b879481c       kube-proxy-kn684                            kube-system
	6fe616ee0273c       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   4 seconds ago       Running             kindnet-cni               1                   825c53ee89e4c       kindnet-wbjgj                               kube-system
	e1eed48672587       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   10 seconds ago      Running             kube-controller-manager   1                   a1f61d1ec2a87       kube-controller-manager-newest-cni-886248   kube-system
	6d128baea56ff       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   10 seconds ago      Running             kube-apiserver            1                   5faeba16edaea       kube-apiserver-newest-cni-886248            kube-system
	ffb0198ce7f01       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   10 seconds ago      Running             etcd                      1                   6cdbcd70f4e9c       etcd-newest-cni-886248                      kube-system
	da8265e8e46cd       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   10 seconds ago      Running             kube-scheduler            1                   e9a85763e408b       kube-scheduler-newest-cni-886248            kube-system
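
The container status table is CRI-O's node-side view of the restarted control-plane and daemonset containers (all at attempt 1). It can be reproduced directly on the node; a minimal sketch, assuming SSH access through the profile and using the abbreviated container IDs from the table:

	minikube -p newest-cni-886248 ssh
	sudo crictl ps -a
	sudo crictl logs 972e1acc7cbb2    # kube-proxy; crictl accepts unique ID prefixes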
	
	
	==> describe nodes <==
	Name:               newest-cni-886248
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-886248
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ebd49efe8b7d44f89fc96e21beffcd64dfee8277
	                    minikube.k8s.io/name=newest-cni-886248
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T03_03_34_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 03:03:30 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-886248
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 03:03:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 03:03:58 +0000   Wed, 19 Nov 2025 03:03:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 03:03:58 +0000   Wed, 19 Nov 2025 03:03:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 03:03:58 +0000   Wed, 19 Nov 2025 03:03:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 19 Nov 2025 03:03:58 +0000   Wed, 19 Nov 2025 03:03:21 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-886248
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                aa6cbc50-f2b0-4528-80c3-566034a2d86c
	  Boot ID:                    b92b1939-fcd0-45dc-ac89-2d161566a71c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-886248                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         31s
	  kube-system                 kindnet-wbjgj                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-newest-cni-886248             250m (12%)    0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-newest-cni-886248    200m (10%)    0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-kn684                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-newest-cni-886248             100m (5%)     0 (0%)      0 (0%)           0 (0%)         33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 23s                kube-proxy       
	  Normal   Starting                 4s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  45s (x8 over 45s)  kubelet          Node newest-cni-886248 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    45s (x8 over 45s)  kubelet          Node newest-cni-886248 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     45s (x8 over 45s)  kubelet          Node newest-cni-886248 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    31s                kubelet          Node newest-cni-886248 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 31s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  31s                kubelet          Node newest-cni-886248 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     31s                kubelet          Node newest-cni-886248 status is now: NodeHasSufficientPID
	  Normal   Starting                 31s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           27s                node-controller  Node newest-cni-886248 event: Registered Node newest-cni-886248 in Controller
	  Normal   Starting                 12s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 12s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12s (x2 over 12s)  kubelet          Node newest-cni-886248 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12s (x2 over 12s)  kubelet          Node newest-cni-886248 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12s (x2 over 12s)  kubelet          Node newest-cni-886248 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           3s                 node-controller  Node newest-cni-886248 event: Registered Node newest-cni-886248 in Controller
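
The node description ties together the Pending pods from the start log above: Ready is False because no CNI configuration file exists yet in /etc/cni/net.d/, that keeps the node.kubernetes.io/not-ready:NoSchedule taint in place, and the taint is what left coredns-66bc5c9577-wh5wb and storage-provisioner Unschedulable at 03:04:00. Once the kindnet container shown earlier writes its CNI config, the taint should clear and those pods schedule. A short sketch for watching that transition, assuming the same context name:

	kubectl --context newest-cni-886248 get node newest-cni-886248 -o jsonpath='{.spec.taints}'
	kubectl --context newest-cni-886248 -n kube-system get pods --watch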
	
	
	==> dmesg <==
	[ +25.528121] overlayfs: idmapped layers are currently not supported
	[ +11.329962] overlayfs: idmapped layers are currently not supported
	[Nov19 02:42] overlayfs: idmapped layers are currently not supported
	[ +16.386117] overlayfs: idmapped layers are currently not supported
	[Nov19 02:43] overlayfs: idmapped layers are currently not supported
	[ +23.762081] overlayfs: idmapped layers are currently not supported
	[Nov19 02:45] overlayfs: idmapped layers are currently not supported
	[Nov19 02:46] overlayfs: idmapped layers are currently not supported
	[Nov19 02:48] overlayfs: idmapped layers are currently not supported
	[Nov19 02:50] overlayfs: idmapped layers are currently not supported
	[ +30.622614] overlayfs: idmapped layers are currently not supported
	[Nov19 02:53] overlayfs: idmapped layers are currently not supported
	[Nov19 02:55] overlayfs: idmapped layers are currently not supported
	[ +48.629499] overlayfs: idmapped layers are currently not supported
	[Nov19 02:56] overlayfs: idmapped layers are currently not supported
	[ +31.470515] overlayfs: idmapped layers are currently not supported
	[Nov19 02:57] overlayfs: idmapped layers are currently not supported
	[Nov19 02:58] overlayfs: idmapped layers are currently not supported
	[Nov19 03:00] overlayfs: idmapped layers are currently not supported
	[  +8.385032] overlayfs: idmapped layers are currently not supported
	[Nov19 03:01] overlayfs: idmapped layers are currently not supported
	[  +9.842210] overlayfs: idmapped layers are currently not supported
	[Nov19 03:02] overlayfs: idmapped layers are currently not supported
	[Nov19 03:03] overlayfs: idmapped layers are currently not supported
	[ +33.377847] overlayfs: idmapped layers are currently not supported
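
The recurring "overlayfs: idmapped layers are currently not supported" lines appear to be informational messages from the 5.15 host kernel whenever an overlay mount requests idmapped layers; they span the whole test run (02:42 through 03:03) rather than this profile alone. A sketch for pulling the same view from the node, assuming the docker driver shares the host kernel as it does here:

	minikube -p newest-cni-886248 ssh -- sudo dmesg | grep -i overlayfs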
	
	
	==> etcd [ffb0198ce7f012092c5e61eeb22ee641ada1a435b1cd87da1b7ad5f0d00519fc] <==
	{"level":"warn","ts":"2025-11-19T03:03:56.602107Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:03:56.666960Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:03:56.740376Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:03:56.753118Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:03:56.773577Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:03:56.805630Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:03:56.821592Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:03:56.849379Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:03:56.878463Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:03:56.932073Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:03:56.953691Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:03:56.990573Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:03:57.031733Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:03:57.055545Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:03:57.097976Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:03:57.143156Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:03:57.217527Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:03:57.235523Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:03:57.281879Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:03:57.333476Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:03:57.383950Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:03:57.419382Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:03:57.434528Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:03:57.465953Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:03:57.535722Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44876","server-name":"","error":"EOF"}
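
All of the etcd warnings above come from 127.0.0.1 during the 03:03:56-57 window, while the restarted kube-apiserver was still coming up (its caches finish syncing at 03:03:58 in the next section). Connections that open and close without completing a request, such as TCP/TLS health probes, are typically logged by etcd as "rejected connection ... EOF" and are usually harmless. A sketch for re-reading the same log through the apiserver instead of crictl, using the static pod name from the node description:

	kubectl --context newest-cni-886248 -n kube-system logs etcd-newest-cni-886248 | grep "rejected connection"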
	
	
	==> kernel <==
	 03:04:04 up 10:46,  0 user,  load average: 6.02, 4.18, 3.08
	Linux newest-cni-886248 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6fe616ee0273c716f6e7a6fb7b7ac5a8ff750f30a0b8d2ed8d266d7ad6a45adc] <==
	I1119 03:03:59.425827       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 03:03:59.431652       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1119 03:03:59.434439       1 main.go:148] setting mtu 1500 for CNI 
	I1119 03:03:59.434465       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 03:03:59.434480       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T03:03:59Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 03:03:59.627786       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 03:03:59.627803       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 03:03:59.627820       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 03:03:59.628728       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
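
The kindnet "nri plugin exited" line only means /var/run/nri/nri.sock is missing on the node, so the optional NRI integration is skipped; the controller lines directly above show the network-policy controller still starting. A sketch to confirm the socket is simply absent, assuming the same profile:

	minikube -p newest-cni-886248 ssh -- ls -l /var/run/nri/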
	
	
	==> kube-apiserver [6d128baea56ffd78984f08dc3dc92a053e8b13d6136d8a220e0fb895c448d4be] <==
	I1119 03:03:58.525764       1 autoregister_controller.go:144] Starting autoregister controller
	I1119 03:03:58.525771       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1119 03:03:58.525776       1 cache.go:39] Caches are synced for autoregister controller
	I1119 03:03:58.584835       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1119 03:03:58.590210       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 03:03:58.596822       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1119 03:03:58.596942       1 policy_source.go:240] refreshing policies
	I1119 03:03:58.597014       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1119 03:03:58.597024       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1119 03:03:58.597162       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1119 03:03:58.598057       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1119 03:03:58.598836       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1119 03:03:58.605894       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 03:03:58.800205       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1119 03:03:59.373442       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 03:03:59.723691       1 controller.go:667] quota admission added evaluator for: namespaces
	I1119 03:03:59.778625       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1119 03:03:59.806034       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 03:03:59.817141       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 03:03:59.878429       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.116.57"}
	I1119 03:03:59.907223       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.70.15"}
	I1119 03:04:01.979512       1 controller.go:667] quota admission added evaluator for: endpoints
	I1119 03:04:02.268607       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1119 03:04:02.414344       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1119 03:04:02.465250       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [e1eed48672587b0f0942e9efafdc58e46f8385b96a631acf88ebc24aca51da13] <==
	I1119 03:04:01.907313       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1119 03:04:01.908472       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1119 03:04:01.909592       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1119 03:04:01.909670       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1119 03:04:01.909863       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1119 03:04:01.912305       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1119 03:04:01.913495       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1119 03:04:01.913578       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1119 03:04:01.913618       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1119 03:04:01.913748       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1119 03:04:01.915981       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1119 03:04:01.916409       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1119 03:04:01.926659       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1119 03:04:01.928934       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 03:04:01.932051       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1119 03:04:01.934327       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1119 03:04:01.935767       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1119 03:04:01.938294       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 03:04:01.941432       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1119 03:04:01.942647       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1119 03:04:01.946011       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1119 03:04:01.965789       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 03:04:01.965817       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1119 03:04:01.965823       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1119 03:04:01.966341       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	
	
	==> kube-proxy [972e1acc7cbb218ba9da3cd0acc60b2dc76dcbd471980cb3d49154c780725250] <==
	I1119 03:03:59.617736       1 server_linux.go:53] "Using iptables proxy"
	I1119 03:03:59.887145       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 03:03:59.988269       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 03:03:59.988303       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1119 03:03:59.988394       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 03:04:00.155979       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 03:04:00.156129       1 server_linux.go:132] "Using iptables Proxier"
	I1119 03:04:00.163083       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 03:04:00.163669       1 server.go:527] "Version info" version="v1.34.1"
	I1119 03:04:00.163742       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 03:04:00.173609       1 config.go:200] "Starting service config controller"
	I1119 03:04:00.173760       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 03:04:00.183193       1 config.go:106] "Starting endpoint slice config controller"
	I1119 03:04:00.183236       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 03:04:00.183272       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 03:04:00.183277       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 03:04:00.184049       1 config.go:309] "Starting node config controller"
	I1119 03:04:00.184062       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 03:04:00.184068       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 03:04:00.283023       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1119 03:04:00.284259       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1119 03:04:00.284474       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [da8265e8e46cd2db7db56b1bcfe9737eace63b799347a005a9b97166455a3aff] <==
	I1119 03:03:55.800056       1 serving.go:386] Generated self-signed cert in-memory
	I1119 03:03:59.007536       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1119 03:03:59.007596       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 03:03:59.066231       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1119 03:03:59.066349       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1119 03:03:59.066373       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1119 03:03:59.066420       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1119 03:03:59.077088       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 03:03:59.077132       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 03:03:59.078726       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1119 03:03:59.078749       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1119 03:03:59.166674       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1119 03:03:59.183304       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1119 03:03:59.287815       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 19 03:03:58 newest-cni-886248 kubelet[734]: I1119 03:03:58.549657     734 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-886248"
	Nov 19 03:03:58 newest-cni-886248 kubelet[734]: E1119 03:03:58.710913     734 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-886248\" already exists" pod="kube-system/etcd-newest-cni-886248"
	Nov 19 03:03:58 newest-cni-886248 kubelet[734]: I1119 03:03:58.710952     734 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-886248"
	Nov 19 03:03:58 newest-cni-886248 kubelet[734]: E1119 03:03:58.711111     734 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-886248\" already exists" pod="kube-system/kube-scheduler-newest-cni-886248"
	Nov 19 03:03:58 newest-cni-886248 kubelet[734]: I1119 03:03:58.720270     734 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-886248"
	Nov 19 03:03:58 newest-cni-886248 kubelet[734]: I1119 03:03:58.720369     734 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-886248"
	Nov 19 03:03:58 newest-cni-886248 kubelet[734]: I1119 03:03:58.720398     734 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 19 03:03:58 newest-cni-886248 kubelet[734]: I1119 03:03:58.721419     734 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 19 03:03:58 newest-cni-886248 kubelet[734]: E1119 03:03:58.744481     734 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-886248\" already exists" pod="kube-system/kube-apiserver-newest-cni-886248"
	Nov 19 03:03:58 newest-cni-886248 kubelet[734]: I1119 03:03:58.744508     734 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-886248"
	Nov 19 03:03:58 newest-cni-886248 kubelet[734]: I1119 03:03:58.752708     734 apiserver.go:52] "Watching apiserver"
	Nov 19 03:03:58 newest-cni-886248 kubelet[734]: E1119 03:03:58.767404     734 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-886248\" already exists" pod="kube-system/kube-controller-manager-newest-cni-886248"
	Nov 19 03:03:58 newest-cni-886248 kubelet[734]: I1119 03:03:58.767538     734 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-886248"
	Nov 19 03:03:58 newest-cni-886248 kubelet[734]: I1119 03:03:58.767492     734 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 19 03:03:58 newest-cni-886248 kubelet[734]: E1119 03:03:58.786878     734 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-886248\" already exists" pod="kube-system/kube-scheduler-newest-cni-886248"
	Nov 19 03:03:58 newest-cni-886248 kubelet[734]: I1119 03:03:58.792173     734 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f1bf5af1-d3b3-4b9b-9d12-1f94f4b89a2f-xtables-lock\") pod \"kube-proxy-kn684\" (UID: \"f1bf5af1-d3b3-4b9b-9d12-1f94f4b89a2f\") " pod="kube-system/kube-proxy-kn684"
	Nov 19 03:03:58 newest-cni-886248 kubelet[734]: I1119 03:03:58.792249     734 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/baa5b1cf-5f4f-4ca9-959c-af74d9f62f83-cni-cfg\") pod \"kindnet-wbjgj\" (UID: \"baa5b1cf-5f4f-4ca9-959c-af74d9f62f83\") " pod="kube-system/kindnet-wbjgj"
	Nov 19 03:03:58 newest-cni-886248 kubelet[734]: I1119 03:03:58.792270     734 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/baa5b1cf-5f4f-4ca9-959c-af74d9f62f83-lib-modules\") pod \"kindnet-wbjgj\" (UID: \"baa5b1cf-5f4f-4ca9-959c-af74d9f62f83\") " pod="kube-system/kindnet-wbjgj"
	Nov 19 03:03:58 newest-cni-886248 kubelet[734]: I1119 03:03:58.792312     734 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/baa5b1cf-5f4f-4ca9-959c-af74d9f62f83-xtables-lock\") pod \"kindnet-wbjgj\" (UID: \"baa5b1cf-5f4f-4ca9-959c-af74d9f62f83\") " pod="kube-system/kindnet-wbjgj"
	Nov 19 03:03:58 newest-cni-886248 kubelet[734]: I1119 03:03:58.792335     734 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f1bf5af1-d3b3-4b9b-9d12-1f94f4b89a2f-lib-modules\") pod \"kube-proxy-kn684\" (UID: \"f1bf5af1-d3b3-4b9b-9d12-1f94f4b89a2f\") " pod="kube-system/kube-proxy-kn684"
	Nov 19 03:03:58 newest-cni-886248 kubelet[734]: I1119 03:03:58.822054     734 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 19 03:03:59 newest-cni-886248 kubelet[734]: W1119 03:03:59.146729     734 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/9ceb6de1b4d7ca96fef3ff191371417b9c4e247fd4589ac0d2a1c844b9532578/crio-825c53ee89e4c089491320ebe97678466d051b9d8c323193e2646be2ac95a30e WatchSource:0}: Error finding container 825c53ee89e4c089491320ebe97678466d051b9d8c323193e2646be2ac95a30e: Status 404 returned error can't find the container with id 825c53ee89e4c089491320ebe97678466d051b9d8c323193e2646be2ac95a30e
	Nov 19 03:04:01 newest-cni-886248 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 19 03:04:01 newest-cni-886248 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 19 03:04:01 newest-cni-886248 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-886248 -n newest-cni-886248
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-886248 -n newest-cni-886248: exit status 2 (370.033167ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-886248 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-wh5wb storage-provisioner dashboard-metrics-scraper-6ffb444bf9-jjfww kubernetes-dashboard-855c9754f9-jd7tx
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-886248 describe pod coredns-66bc5c9577-wh5wb storage-provisioner dashboard-metrics-scraper-6ffb444bf9-jjfww kubernetes-dashboard-855c9754f9-jd7tx
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-886248 describe pod coredns-66bc5c9577-wh5wb storage-provisioner dashboard-metrics-scraper-6ffb444bf9-jjfww kubernetes-dashboard-855c9754f9-jd7tx: exit status 1 (89.789606ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-wh5wb" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-jjfww" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-jd7tx" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-886248 describe pod coredns-66bc5c9577-wh5wb storage-provisioner dashboard-metrics-scraper-6ffb444bf9-jjfww kubernetes-dashboard-855c9754f9-jd7tx: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-886248
helpers_test.go:243: (dbg) docker inspect newest-cni-886248:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9ceb6de1b4d7ca96fef3ff191371417b9c4e247fd4589ac0d2a1c844b9532578",
	        "Created": "2025-11-19T03:02:56.888437987Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1670302,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T03:03:45.860284425Z",
	            "FinishedAt": "2025-11-19T03:03:44.773135045Z"
	        },
	        "Image": "sha256:6dfeb5329cc83d126555f88f43b09eef7a09c7f546c9166b94d33747df91b6df",
	        "ResolvConfPath": "/var/lib/docker/containers/9ceb6de1b4d7ca96fef3ff191371417b9c4e247fd4589ac0d2a1c844b9532578/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9ceb6de1b4d7ca96fef3ff191371417b9c4e247fd4589ac0d2a1c844b9532578/hostname",
	        "HostsPath": "/var/lib/docker/containers/9ceb6de1b4d7ca96fef3ff191371417b9c4e247fd4589ac0d2a1c844b9532578/hosts",
	        "LogPath": "/var/lib/docker/containers/9ceb6de1b4d7ca96fef3ff191371417b9c4e247fd4589ac0d2a1c844b9532578/9ceb6de1b4d7ca96fef3ff191371417b9c4e247fd4589ac0d2a1c844b9532578-json.log",
	        "Name": "/newest-cni-886248",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-886248:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-886248",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9ceb6de1b4d7ca96fef3ff191371417b9c4e247fd4589ac0d2a1c844b9532578",
	                "LowerDir": "/var/lib/docker/overlay2/a7a465fbddf49e2c56bd6046cea36a4642d75f6313895eecf81a83070429de04-init/diff:/var/lib/docker/overlay2/c48d08e2bd245db4e1c5c6447aff9f72126e9377265a1f1172daf5070a059e2a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a7a465fbddf49e2c56bd6046cea36a4642d75f6313895eecf81a83070429de04/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a7a465fbddf49e2c56bd6046cea36a4642d75f6313895eecf81a83070429de04/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a7a465fbddf49e2c56bd6046cea36a4642d75f6313895eecf81a83070429de04/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-886248",
	                "Source": "/var/lib/docker/volumes/newest-cni-886248/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-886248",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-886248",
	                "name.minikube.sigs.k8s.io": "newest-cni-886248",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d66a7bcf483640e1a9f4a322ed49bd824d1931e43ad50c9d2ade68147972e1f1",
	            "SandboxKey": "/var/run/docker/netns/d66a7bcf4836",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34935"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34936"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34939"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34937"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34938"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-886248": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "12:9e:64:02:3c:e5",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "40af1a67c106c706985a7e4604847892fa565af460e6b79b193e66105f198b32",
	                    "EndpointID": "d992c5be63db9e61bd795622a740a6a7856427e87be5ecb400293d30b934d6bc",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-886248",
	                        "9ceb6de1b4d7"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-886248 -n newest-cni-886248
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-886248 -n newest-cni-886248: exit status 2 (340.879959ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-886248 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-886248 logs -n 25: (1.095935607s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p default-k8s-diff-port-579203 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-579203 │ jenkins │ v1.37.0 │ 19 Nov 25 03:01 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-579203 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-579203 │ jenkins │ v1.37.0 │ 19 Nov 25 03:01 UTC │ 19 Nov 25 03:01 UTC │
	│ addons  │ enable metrics-server -p embed-certs-592123 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-592123           │ jenkins │ v1.37.0 │ 19 Nov 25 03:01 UTC │                     │
	│ stop    │ -p embed-certs-592123 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-592123           │ jenkins │ v1.37.0 │ 19 Nov 25 03:01 UTC │ 19 Nov 25 03:01 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-579203 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-579203 │ jenkins │ v1.37.0 │ 19 Nov 25 03:01 UTC │ 19 Nov 25 03:01 UTC │
	│ start   │ -p default-k8s-diff-port-579203 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-579203 │ jenkins │ v1.37.0 │ 19 Nov 25 03:01 UTC │ 19 Nov 25 03:02 UTC │
	│ addons  │ enable dashboard -p embed-certs-592123 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-592123           │ jenkins │ v1.37.0 │ 19 Nov 25 03:01 UTC │ 19 Nov 25 03:01 UTC │
	│ start   │ -p embed-certs-592123 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-592123           │ jenkins │ v1.37.0 │ 19 Nov 25 03:01 UTC │ 19 Nov 25 03:02 UTC │
	│ image   │ default-k8s-diff-port-579203 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-579203 │ jenkins │ v1.37.0 │ 19 Nov 25 03:02 UTC │ 19 Nov 25 03:02 UTC │
	│ pause   │ -p default-k8s-diff-port-579203 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-579203 │ jenkins │ v1.37.0 │ 19 Nov 25 03:02 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-579203                                                                                                                                                                                                               │ default-k8s-diff-port-579203 │ jenkins │ v1.37.0 │ 19 Nov 25 03:02 UTC │ 19 Nov 25 03:02 UTC │
	│ delete  │ -p default-k8s-diff-port-579203                                                                                                                                                                                                               │ default-k8s-diff-port-579203 │ jenkins │ v1.37.0 │ 19 Nov 25 03:02 UTC │ 19 Nov 25 03:02 UTC │
	│ delete  │ -p disable-driver-mounts-722439                                                                                                                                                                                                               │ disable-driver-mounts-722439 │ jenkins │ v1.37.0 │ 19 Nov 25 03:02 UTC │ 19 Nov 25 03:02 UTC │
	│ start   │ -p no-preload-800908 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-800908            │ jenkins │ v1.37.0 │ 19 Nov 25 03:02 UTC │ 19 Nov 25 03:03 UTC │
	│ image   │ embed-certs-592123 image list --format=json                                                                                                                                                                                                   │ embed-certs-592123           │ jenkins │ v1.37.0 │ 19 Nov 25 03:02 UTC │ 19 Nov 25 03:02 UTC │
	│ pause   │ -p embed-certs-592123 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-592123           │ jenkins │ v1.37.0 │ 19 Nov 25 03:02 UTC │                     │
	│ delete  │ -p embed-certs-592123                                                                                                                                                                                                                         │ embed-certs-592123           │ jenkins │ v1.37.0 │ 19 Nov 25 03:02 UTC │ 19 Nov 25 03:02 UTC │
	│ delete  │ -p embed-certs-592123                                                                                                                                                                                                                         │ embed-certs-592123           │ jenkins │ v1.37.0 │ 19 Nov 25 03:02 UTC │ 19 Nov 25 03:02 UTC │
	│ start   │ -p newest-cni-886248 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-886248            │ jenkins │ v1.37.0 │ 19 Nov 25 03:02 UTC │ 19 Nov 25 03:03 UTC │
	│ addons  │ enable metrics-server -p newest-cni-886248 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-886248            │ jenkins │ v1.37.0 │ 19 Nov 25 03:03 UTC │                     │
	│ stop    │ -p newest-cni-886248 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-886248            │ jenkins │ v1.37.0 │ 19 Nov 25 03:03 UTC │ 19 Nov 25 03:03 UTC │
	│ addons  │ enable dashboard -p newest-cni-886248 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-886248            │ jenkins │ v1.37.0 │ 19 Nov 25 03:03 UTC │ 19 Nov 25 03:03 UTC │
	│ start   │ -p newest-cni-886248 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-886248            │ jenkins │ v1.37.0 │ 19 Nov 25 03:03 UTC │ 19 Nov 25 03:04 UTC │
	│ image   │ newest-cni-886248 image list --format=json                                                                                                                                                                                                    │ newest-cni-886248            │ jenkins │ v1.37.0 │ 19 Nov 25 03:04 UTC │ 19 Nov 25 03:04 UTC │
	│ pause   │ -p newest-cni-886248 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-886248            │ jenkins │ v1.37.0 │ 19 Nov 25 03:04 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 03:03:45
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 03:03:45.591904 1670171 out.go:360] Setting OutFile to fd 1 ...
	I1119 03:03:45.592050 1670171 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 03:03:45.592063 1670171 out.go:374] Setting ErrFile to fd 2...
	I1119 03:03:45.592081 1670171 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 03:03:45.592410 1670171 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-1463525/.minikube/bin
	I1119 03:03:45.592842 1670171 out.go:368] Setting JSON to false
	I1119 03:03:45.593878 1670171 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":38753,"bootTime":1763482673,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1119 03:03:45.593951 1670171 start.go:143] virtualization:  
	I1119 03:03:45.597601 1670171 out.go:179] * [newest-cni-886248] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1119 03:03:45.601450 1670171 out.go:179]   - MINIKUBE_LOCATION=21924
	I1119 03:03:45.601549 1670171 notify.go:221] Checking for updates...
	I1119 03:03:45.607475 1670171 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 03:03:45.610383 1670171 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21924-1463525/kubeconfig
	I1119 03:03:45.613272 1670171 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-1463525/.minikube
	I1119 03:03:45.616127 1670171 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1119 03:03:45.618961 1670171 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 03:03:45.622305 1670171 config.go:182] Loaded profile config "newest-cni-886248": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 03:03:45.622921 1670171 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 03:03:45.642673 1670171 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1119 03:03:45.642800 1670171 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 03:03:45.699691 1670171 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-19 03:03:45.690485624 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 03:03:45.699792 1670171 docker.go:319] overlay module found
	I1119 03:03:45.702930 1670171 out.go:179] * Using the docker driver based on existing profile
	I1119 03:03:45.705891 1670171 start.go:309] selected driver: docker
	I1119 03:03:45.705931 1670171 start.go:930] validating driver "docker" against &{Name:newest-cni-886248 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-886248 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 03:03:45.706030 1670171 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 03:03:45.706748 1670171 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 03:03:45.759617 1670171 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-19 03:03:45.750891827 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 03:03:45.759984 1670171 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1119 03:03:45.760016 1670171 cni.go:84] Creating CNI manager for ""
	I1119 03:03:45.760074 1670171 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 03:03:45.760121 1670171 start.go:353] cluster config:
	{Name:newest-cni-886248 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-886248 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 03:03:45.763259 1670171 out.go:179] * Starting "newest-cni-886248" primary control-plane node in "newest-cni-886248" cluster
	I1119 03:03:45.766086 1670171 cache.go:134] Beginning downloading kic base image for docker with crio
	I1119 03:03:45.769085 1670171 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1119 03:03:45.776464 1670171 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 03:03:45.776530 1670171 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1119 03:03:45.776541 1670171 cache.go:65] Caching tarball of preloaded images
	I1119 03:03:45.776564 1670171 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1119 03:03:45.776640 1670171 preload.go:238] Found /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1119 03:03:45.776651 1670171 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 03:03:45.776764 1670171 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/newest-cni-886248/config.json ...
	I1119 03:03:45.796299 1670171 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1119 03:03:45.796323 1670171 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1119 03:03:45.796337 1670171 cache.go:243] Successfully downloaded all kic artifacts
	I1119 03:03:45.796360 1670171 start.go:360] acquireMachinesLock for newest-cni-886248: {Name:mkfb71f15fb61e4b42e0e59e9b569595aaffd1c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 03:03:45.796418 1670171 start.go:364] duration metric: took 36.208µs to acquireMachinesLock for "newest-cni-886248"
	I1119 03:03:45.796442 1670171 start.go:96] Skipping create...Using existing machine configuration
	I1119 03:03:45.796451 1670171 fix.go:54] fixHost starting: 
	I1119 03:03:45.796704 1670171 cli_runner.go:164] Run: docker container inspect newest-cni-886248 --format={{.State.Status}}
	I1119 03:03:45.813325 1670171 fix.go:112] recreateIfNeeded on newest-cni-886248: state=Stopped err=<nil>
	W1119 03:03:45.813357 1670171 fix.go:138] unexpected machine state, will restart: <nil>
	W1119 03:03:47.534190 1662687 node_ready.go:57] node "no-preload-800908" has "Ready":"False" status (will retry)
	W1119 03:03:49.534810 1662687 node_ready.go:57] node "no-preload-800908" has "Ready":"False" status (will retry)
	I1119 03:03:45.816577 1670171 out.go:252] * Restarting existing docker container for "newest-cni-886248" ...
	I1119 03:03:45.816672 1670171 cli_runner.go:164] Run: docker start newest-cni-886248
	I1119 03:03:46.111155 1670171 cli_runner.go:164] Run: docker container inspect newest-cni-886248 --format={{.State.Status}}
	I1119 03:03:46.133780 1670171 kic.go:430] container "newest-cni-886248" state is running.
	I1119 03:03:46.134294 1670171 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-886248
	I1119 03:03:46.165832 1670171 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/newest-cni-886248/config.json ...
	I1119 03:03:46.166055 1670171 machine.go:94] provisionDockerMachine start ...
	I1119 03:03:46.166113 1670171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-886248
	I1119 03:03:46.193071 1670171 main.go:143] libmachine: Using SSH client type: native
	I1119 03:03:46.193388 1670171 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34935 <nil> <nil>}
	I1119 03:03:46.193398 1670171 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 03:03:46.194175 1670171 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1119 03:03:49.341005 1670171 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-886248
	
	I1119 03:03:49.341030 1670171 ubuntu.go:182] provisioning hostname "newest-cni-886248"
	I1119 03:03:49.341151 1670171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-886248
	I1119 03:03:49.359764 1670171 main.go:143] libmachine: Using SSH client type: native
	I1119 03:03:49.360071 1670171 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34935 <nil> <nil>}
	I1119 03:03:49.360088 1670171 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-886248 && echo "newest-cni-886248" | sudo tee /etc/hostname
	I1119 03:03:49.511179 1670171 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-886248
	
	I1119 03:03:49.511253 1670171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-886248
	I1119 03:03:49.530369 1670171 main.go:143] libmachine: Using SSH client type: native
	I1119 03:03:49.530682 1670171 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34935 <nil> <nil>}
	I1119 03:03:49.530705 1670171 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-886248' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-886248/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-886248' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 03:03:49.669701 1670171 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 03:03:49.669771 1670171 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21924-1463525/.minikube CaCertPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21924-1463525/.minikube}
	I1119 03:03:49.669819 1670171 ubuntu.go:190] setting up certificates
	I1119 03:03:49.669858 1670171 provision.go:84] configureAuth start
	I1119 03:03:49.670007 1670171 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-886248
	I1119 03:03:49.687518 1670171 provision.go:143] copyHostCerts
	I1119 03:03:49.687594 1670171 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.pem, removing ...
	I1119 03:03:49.687611 1670171 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.pem
	I1119 03:03:49.687686 1670171 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.pem (1078 bytes)
	I1119 03:03:49.687785 1670171 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-1463525/.minikube/cert.pem, removing ...
	I1119 03:03:49.687790 1670171 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-1463525/.minikube/cert.pem
	I1119 03:03:49.687815 1670171 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21924-1463525/.minikube/cert.pem (1123 bytes)
	I1119 03:03:49.687869 1670171 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-1463525/.minikube/key.pem, removing ...
	I1119 03:03:49.687873 1670171 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-1463525/.minikube/key.pem
	I1119 03:03:49.687907 1670171 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21924-1463525/.minikube/key.pem (1675 bytes)
	I1119 03:03:49.688000 1670171 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca-key.pem org=jenkins.newest-cni-886248 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-886248]
	I1119 03:03:50.073650 1670171 provision.go:177] copyRemoteCerts
	I1119 03:03:50.073720 1670171 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 03:03:50.073771 1670171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-886248
	I1119 03:03:50.092763 1670171 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34935 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/newest-cni-886248/id_rsa Username:docker}
	I1119 03:03:50.198820 1670171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1119 03:03:50.218965 1670171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1119 03:03:50.240614 1670171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1119 03:03:50.257971 1670171 provision.go:87] duration metric: took 588.071039ms to configureAuth
	I1119 03:03:50.257999 1670171 ubuntu.go:206] setting minikube options for container-runtime
	I1119 03:03:50.258207 1670171 config.go:182] Loaded profile config "newest-cni-886248": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 03:03:50.258311 1670171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-886248
	I1119 03:03:50.279488 1670171 main.go:143] libmachine: Using SSH client type: native
	I1119 03:03:50.279799 1670171 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34935 <nil> <nil>}
	I1119 03:03:50.279814 1670171 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 03:03:50.612777 1670171 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 03:03:50.612824 1670171 machine.go:97] duration metric: took 4.446759249s to provisionDockerMachine
	I1119 03:03:50.612836 1670171 start.go:293] postStartSetup for "newest-cni-886248" (driver="docker")
	I1119 03:03:50.612847 1670171 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 03:03:50.612915 1670171 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 03:03:50.612971 1670171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-886248
	I1119 03:03:50.630696 1670171 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34935 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/newest-cni-886248/id_rsa Username:docker}
	I1119 03:03:50.729202 1670171 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 03:03:50.732546 1670171 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 03:03:50.732574 1670171 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 03:03:50.732585 1670171 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-1463525/.minikube/addons for local assets ...
	I1119 03:03:50.732638 1670171 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-1463525/.minikube/files for local assets ...
	I1119 03:03:50.732719 1670171 filesync.go:149] local asset: /home/jenkins/minikube-integration/21924-1463525/.minikube/files/etc/ssl/certs/14653772.pem -> 14653772.pem in /etc/ssl/certs
	I1119 03:03:50.732818 1670171 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 03:03:50.740334 1670171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/files/etc/ssl/certs/14653772.pem --> /etc/ssl/certs/14653772.pem (1708 bytes)
	I1119 03:03:50.758017 1670171 start.go:296] duration metric: took 145.165054ms for postStartSetup
	I1119 03:03:50.758134 1670171 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 03:03:50.758204 1670171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-886248
	I1119 03:03:50.775332 1670171 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34935 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/newest-cni-886248/id_rsa Username:docker}
	I1119 03:03:50.874646 1670171 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 03:03:50.879898 1670171 fix.go:56] duration metric: took 5.083439603s for fixHost
	I1119 03:03:50.879932 1670171 start.go:83] releasing machines lock for "newest-cni-886248", held for 5.083490334s
	I1119 03:03:50.880002 1670171 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-886248
	I1119 03:03:50.898647 1670171 ssh_runner.go:195] Run: cat /version.json
	I1119 03:03:50.898693 1670171 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 03:03:50.898701 1670171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-886248
	I1119 03:03:50.898767 1670171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-886248
	I1119 03:03:50.918189 1670171 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34935 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/newest-cni-886248/id_rsa Username:docker}
	I1119 03:03:50.931779 1670171 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34935 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/newest-cni-886248/id_rsa Username:docker}
	I1119 03:03:51.021667 1670171 ssh_runner.go:195] Run: systemctl --version
	I1119 03:03:51.117597 1670171 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 03:03:51.158694 1670171 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 03:03:51.163262 1670171 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 03:03:51.163354 1670171 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 03:03:51.171346 1670171 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1119 03:03:51.171425 1670171 start.go:496] detecting cgroup driver to use...
	I1119 03:03:51.171471 1670171 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1119 03:03:51.171553 1670171 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 03:03:51.187671 1670171 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 03:03:51.200786 1670171 docker.go:218] disabling cri-docker service (if available) ...
	I1119 03:03:51.200896 1670171 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 03:03:51.216379 1670171 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 03:03:51.229862 1670171 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 03:03:51.347667 1670171 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 03:03:51.471731 1670171 docker.go:234] disabling docker service ...
	I1119 03:03:51.471852 1670171 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 03:03:51.488567 1670171 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 03:03:51.501380 1670171 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 03:03:51.616293 1670171 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 03:03:51.750642 1670171 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 03:03:51.764770 1670171 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 03:03:51.780195 1670171 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 03:03:51.780293 1670171 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 03:03:51.789091 1670171 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1119 03:03:51.789187 1670171 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 03:03:51.800544 1670171 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 03:03:51.809156 1670171 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 03:03:51.817934 1670171 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 03:03:51.826258 1670171 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 03:03:51.835386 1670171 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 03:03:51.843742 1670171 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 03:03:51.857715 1670171 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 03:03:51.865787 1670171 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 03:03:51.872921 1670171 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 03:03:51.991413 1670171 ssh_runner.go:195] Run: sudo systemctl restart crio
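All of the sed edits above target the same drop-in, /etc/crio/crio.conf.d/02-crio.conf; a quick way to confirm the result on the node (not part of the test itself) is:
	sudo grep -nE 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# Expected after the edits: pause_image = "registry.k8s.io/pause:3.10.1",
	# cgroup_manager = "cgroupfs", conmon_cgroup = "pod", and
	# "net.ipv4.ip_unprivileged_port_start=0" inside default_sysctls.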
	I1119 03:03:52.174666 1670171 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 03:03:52.174809 1670171 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 03:03:52.178820 1670171 start.go:564] Will wait 60s for crictl version
	I1119 03:03:52.178905 1670171 ssh_runner.go:195] Run: which crictl
	I1119 03:03:52.182623 1670171 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 03:03:52.212784 1670171 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1119 03:03:52.212891 1670171 ssh_runner.go:195] Run: crio --version
	I1119 03:03:52.240641 1670171 ssh_runner.go:195] Run: crio --version
	I1119 03:03:52.274720 1670171 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1119 03:03:52.277698 1670171 cli_runner.go:164] Run: docker network inspect newest-cni-886248 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 03:03:52.294004 1670171 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1119 03:03:52.297976 1670171 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 03:03:52.310508 1670171 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1119 03:03:52.313227 1670171 kubeadm.go:884] updating cluster {Name:newest-cni-886248 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-886248 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:
262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 03:03:52.313380 1670171 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 03:03:52.313477 1670171 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 03:03:52.352180 1670171 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 03:03:52.352206 1670171 crio.go:433] Images already preloaded, skipping extraction
	I1119 03:03:52.352263 1670171 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 03:03:52.377469 1670171 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 03:03:52.377494 1670171 cache_images.go:86] Images are preloaded, skipping loading
	I1119 03:03:52.377502 1670171 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1119 03:03:52.377645 1670171 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-886248 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-886248 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
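The empty ExecStart= line is the usual systemd idiom for clearing the packaged command before supplying minikube's own; after the drop-in is written (the scp of 10-kubeadm.conf a few lines below) it can be inspected and reloaded with:
	systemctl cat kubelet          # prints kubelet.service plus the 10-kubeadm.conf drop-in
	sudo systemctl daemon-reload   # pick up the drop-in, as the log does before "systemctl start kubelet"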
	I1119 03:03:52.377732 1670171 ssh_runner.go:195] Run: crio config
	I1119 03:03:52.439427 1670171 cni.go:84] Creating CNI manager for ""
	I1119 03:03:52.439453 1670171 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 03:03:52.439477 1670171 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1119 03:03:52.439506 1670171 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-886248 NodeName:newest-cni-886248 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 03:03:52.439642 1670171 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-886248"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
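On this restart path the generated config is only written as kubeadm.yaml.new and diffed against the existing file (see the diff a few lines below); on a fresh start, a config like this would be consumed directly, roughly:
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml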
	
	I1119 03:03:52.439723 1670171 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 03:03:52.447553 1670171 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 03:03:52.447643 1670171 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 03:03:52.454917 1670171 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1119 03:03:52.467306 1670171 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 03:03:52.480129 1670171 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1119 03:03:52.493028 1670171 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1119 03:03:52.496761 1670171 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 03:03:52.506994 1670171 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 03:03:52.620332 1670171 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 03:03:52.636936 1670171 certs.go:69] Setting up /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/newest-cni-886248 for IP: 192.168.76.2
	I1119 03:03:52.636959 1670171 certs.go:195] generating shared ca certs ...
	I1119 03:03:52.636981 1670171 certs.go:227] acquiring lock for ca certs: {Name:mk25124d30540ffea11da5f0aa38fb6f55186602 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 03:03:52.637113 1670171 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.key
	I1119 03:03:52.637157 1670171 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/proxy-client-ca.key
	I1119 03:03:52.637169 1670171 certs.go:257] generating profile certs ...
	I1119 03:03:52.637256 1670171 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/newest-cni-886248/client.key
	I1119 03:03:52.637329 1670171 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/newest-cni-886248/apiserver.key.774757e0
	I1119 03:03:52.637375 1670171 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/newest-cni-886248/proxy-client.key
	I1119 03:03:52.637497 1670171 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/1465377.pem (1338 bytes)
	W1119 03:03:52.637676 1670171 certs.go:480] ignoring /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/1465377_empty.pem, impossibly tiny 0 bytes
	I1119 03:03:52.637693 1670171 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca-key.pem (1679 bytes)
	I1119 03:03:52.637721 1670171 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem (1078 bytes)
	I1119 03:03:52.637744 1670171 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/cert.pem (1123 bytes)
	I1119 03:03:52.637782 1670171 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/key.pem (1675 bytes)
	I1119 03:03:52.637834 1670171 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/files/etc/ssl/certs/14653772.pem (1708 bytes)
	I1119 03:03:52.638422 1670171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 03:03:52.660084 1670171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1119 03:03:52.678457 1670171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 03:03:52.695538 1670171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1119 03:03:52.712623 1670171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/newest-cni-886248/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1119 03:03:52.752891 1670171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/newest-cni-886248/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1119 03:03:52.781289 1670171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/newest-cni-886248/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 03:03:52.800723 1670171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/newest-cni-886248/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1119 03:03:52.818904 1670171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/1465377.pem --> /usr/share/ca-certificates/1465377.pem (1338 bytes)
	I1119 03:03:52.838576 1670171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/files/etc/ssl/certs/14653772.pem --> /usr/share/ca-certificates/14653772.pem (1708 bytes)
	I1119 03:03:52.859843 1670171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 03:03:52.882715 1670171 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 03:03:52.896517 1670171 ssh_runner.go:195] Run: openssl version
	I1119 03:03:52.903148 1670171 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14653772.pem && ln -fs /usr/share/ca-certificates/14653772.pem /etc/ssl/certs/14653772.pem"
	I1119 03:03:52.911737 1670171 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14653772.pem
	I1119 03:03:52.915641 1670171 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 02:04 /usr/share/ca-certificates/14653772.pem
	I1119 03:03:52.915745 1670171 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14653772.pem
	I1119 03:03:52.962206 1670171 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14653772.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 03:03:52.970041 1670171 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 03:03:52.978260 1670171 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 03:03:52.982117 1670171 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 01:58 /usr/share/ca-certificates/minikubeCA.pem
	I1119 03:03:52.982258 1670171 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 03:03:53.023645 1670171 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 03:03:53.032824 1670171 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1465377.pem && ln -fs /usr/share/ca-certificates/1465377.pem /etc/ssl/certs/1465377.pem"
	I1119 03:03:53.041380 1670171 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1465377.pem
	I1119 03:03:53.045061 1670171 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 02:04 /usr/share/ca-certificates/1465377.pem
	I1119 03:03:53.045134 1670171 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1465377.pem
	I1119 03:03:53.087163 1670171 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1465377.pem /etc/ssl/certs/51391683.0"
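The hex link names used here (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject hashes, so each symlink could equally be built on the fly, for example:
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/1465377.pem)   # 51391683 here
	sudo ln -fs /usr/share/ca-certificates/1465377.pem "/etc/ssl/certs/${HASH}.0"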
	I1119 03:03:53.095054 1670171 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 03:03:53.098639 1670171 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1119 03:03:53.140386 1670171 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1119 03:03:53.190466 1670171 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1119 03:03:53.233941 1670171 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1119 03:03:53.274859 1670171 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1119 03:03:53.318412 1670171 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
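Each -checkend call exits 0 when the certificate remains valid for the next 86400 seconds and 1 otherwise, which is what decides whether the restart path can reuse the existing control-plane certificates; a standalone check looks like:
	if openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver.crt; then
	  echo "apiserver cert valid for at least another 24h"
	else
	  echo "apiserver cert expires within 24h (or is already expired)"
	fi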
	I1119 03:03:53.389564 1670171 kubeadm.go:401] StartCluster: {Name:newest-cni-886248 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-886248 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262
144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 03:03:53.389709 1670171 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 03:03:53.389803 1670171 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 03:03:53.518324 1670171 cri.go:89] found id: "e1eed48672587b0f0942e9efafdc58e46f8385b96a631acf88ebc24aca51da13"
	I1119 03:03:53.518395 1670171 cri.go:89] found id: "6d128baea56ffd78984f08dc3dc92a053e8b13d6136d8a220e0fb895c448d4be"
	I1119 03:03:53.518413 1670171 cri.go:89] found id: "ffb0198ce7f012092c5e61eeb22ee641ada1a435b1cd87da1b7ad5f0d00519fc"
	I1119 03:03:53.518429 1670171 cri.go:89] found id: "da8265e8e46cd2db7db56b1bcfe9737eace63b799347a005a9b97166455a3aff"
	I1119 03:03:53.518446 1670171 cri.go:89] found id: ""
	I1119 03:03:53.518526 1670171 ssh_runner.go:195] Run: sudo runc list -f json
	W1119 03:03:53.565185 1670171 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T03:03:53Z" level=error msg="open /run/runc: no such file or directory"
	I1119 03:03:53.565278 1670171 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 03:03:53.595288 1670171 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1119 03:03:53.595309 1670171 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1119 03:03:53.595359 1670171 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1119 03:03:53.618818 1670171 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1119 03:03:53.619483 1670171 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-886248" does not appear in /home/jenkins/minikube-integration/21924-1463525/kubeconfig
	I1119 03:03:53.619808 1670171 kubeconfig.go:62] /home/jenkins/minikube-integration/21924-1463525/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-886248" cluster setting kubeconfig missing "newest-cni-886248" context setting]
	I1119 03:03:53.620351 1670171 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/kubeconfig: {Name:mk569dbb5fa76b01404eb61ebb04e9884665ea78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 03:03:53.622714 1670171 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1119 03:03:53.641984 1670171 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1119 03:03:53.642062 1670171 kubeadm.go:602] duration metric: took 46.746271ms to restartPrimaryControlPlane
	I1119 03:03:53.642086 1670171 kubeadm.go:403] duration metric: took 252.531076ms to StartCluster
	I1119 03:03:53.642129 1670171 settings.go:142] acquiring lock: {Name:mk60840cd190596747bdc8f4ec6ab9c30048bf11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 03:03:53.642225 1670171 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21924-1463525/kubeconfig
	I1119 03:03:53.649998 1670171 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/kubeconfig: {Name:mk569dbb5fa76b01404eb61ebb04e9884665ea78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
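The repair simply rewrites the kubeconfig in place; done by hand with kubectl it would look roughly like this (the client certificate/key paths under the profile directory are assumed, not shown in this log):
	kubectl config set-cluster newest-cni-886248 \
	  --server=https://192.168.76.2:8443 \
	  --certificate-authority=/home/jenkins/minikube-integration/21924-1463525/.minikube/ca.crt
	kubectl config set-credentials newest-cni-886248 \
	  --client-certificate=/home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/newest-cni-886248/client.crt \
	  --client-key=/home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/newest-cni-886248/client.key
	kubectl config set-context newest-cni-886248 --cluster=newest-cni-886248 --user=newest-cni-886248
	kubectl config use-context newest-cni-886248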
	I1119 03:03:53.650250 1670171 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 03:03:53.651635 1670171 config.go:182] Loaded profile config "newest-cni-886248": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 03:03:53.651683 1670171 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 03:03:53.651746 1670171 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-886248"
	I1119 03:03:53.651761 1670171 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-886248"
	W1119 03:03:53.651767 1670171 addons.go:248] addon storage-provisioner should already be in state true
	I1119 03:03:53.651788 1670171 host.go:66] Checking if "newest-cni-886248" exists ...
	I1119 03:03:53.652206 1670171 cli_runner.go:164] Run: docker container inspect newest-cni-886248 --format={{.State.Status}}
	I1119 03:03:53.652503 1670171 addons.go:70] Setting dashboard=true in profile "newest-cni-886248"
	I1119 03:03:53.652539 1670171 addons.go:239] Setting addon dashboard=true in "newest-cni-886248"
	W1119 03:03:53.652724 1670171 addons.go:248] addon dashboard should already be in state true
	I1119 03:03:53.652762 1670171 host.go:66] Checking if "newest-cni-886248" exists ...
	I1119 03:03:53.652676 1670171 addons.go:70] Setting default-storageclass=true in profile "newest-cni-886248"
	I1119 03:03:53.652946 1670171 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-886248"
	I1119 03:03:53.653236 1670171 cli_runner.go:164] Run: docker container inspect newest-cni-886248 --format={{.State.Status}}
	I1119 03:03:53.655486 1670171 cli_runner.go:164] Run: docker container inspect newest-cni-886248 --format={{.State.Status}}
	I1119 03:03:53.657151 1670171 out.go:179] * Verifying Kubernetes components...
	I1119 03:03:53.660375 1670171 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 03:03:53.708744 1670171 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 03:03:53.708811 1670171 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1119 03:03:53.712928 1670171 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 03:03:53.712951 1670171 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 03:03:53.713018 1670171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-886248
	I1119 03:03:53.716417 1670171 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1119 03:03:52.033911 1662687 node_ready.go:57] node "no-preload-800908" has "Ready":"False" status (will retry)
	I1119 03:03:53.536114 1662687 node_ready.go:49] node "no-preload-800908" is "Ready"
	I1119 03:03:53.536140 1662687 node_ready.go:38] duration metric: took 14.505081158s for node "no-preload-800908" to be "Ready" ...
	I1119 03:03:53.536155 1662687 api_server.go:52] waiting for apiserver process to appear ...
	I1119 03:03:53.536210 1662687 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 03:03:53.568713 1662687 api_server.go:72] duration metric: took 16.982212001s to wait for apiserver process to appear ...
	I1119 03:03:53.568735 1662687 api_server.go:88] waiting for apiserver healthz status ...
	I1119 03:03:53.568758 1662687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1119 03:03:53.578652 1662687 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1119 03:03:53.580029 1662687 api_server.go:141] control plane version: v1.34.1
	I1119 03:03:53.580053 1662687 api_server.go:131] duration metric: took 11.310896ms to wait for apiserver health ...
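The same probes can be reproduced with curl, assuming 192.168.85.2 is reachable from wherever the command runs (it is from the host running this job, as the check above shows):
	curl -sk https://192.168.85.2:8443/healthz   # prints "ok" when healthy; -k because the CA is minikube's own
	curl -sk https://192.168.85.2:8443/version   # reports the control-plane version (v1.34.1 here)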
	I1119 03:03:53.580062 1662687 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 03:03:53.584040 1662687 system_pods.go:59] 8 kube-system pods found
	I1119 03:03:53.584071 1662687 system_pods.go:61] "coredns-66bc5c9577-5gb8d" [f2cf06c3-a27f-4205-bf83-035adba73690] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 03:03:53.584078 1662687 system_pods.go:61] "etcd-no-preload-800908" [4b2e2353-9488-40c1-a11f-79c5089e6fe1] Running
	I1119 03:03:53.584085 1662687 system_pods.go:61] "kindnet-hcdj9" [dc9e982d-8e14-47c6-a9a3-a4502602caa4] Running
	I1119 03:03:53.584089 1662687 system_pods.go:61] "kube-apiserver-no-preload-800908" [3378061b-4194-4784-b307-f948fa017d4a] Running
	I1119 03:03:53.584094 1662687 system_pods.go:61] "kube-controller-manager-no-preload-800908" [cb7bca27-b010-4e89-adb5-9303f09112c5] Running
	I1119 03:03:53.584099 1662687 system_pods.go:61] "kube-proxy-59bnq" [6b6ee3ab-c31d-447c-895b-d341732cb482] Running
	I1119 03:03:53.584103 1662687 system_pods.go:61] "kube-scheduler-no-preload-800908" [214dd1d7-19ed-477b-8170-e9ddfdc6a14b] Running
	I1119 03:03:53.584109 1662687 system_pods.go:61] "storage-provisioner" [41c9b9d6-c070-4f5d-92ec-e0f2baf1609d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 03:03:53.584116 1662687 system_pods.go:74] duration metric: took 4.0479ms to wait for pod list to return data ...
	I1119 03:03:53.584125 1662687 default_sa.go:34] waiting for default service account to be created ...
	I1119 03:03:53.592683 1662687 default_sa.go:45] found service account: "default"
	I1119 03:03:53.592707 1662687 default_sa.go:55] duration metric: took 8.57618ms for default service account to be created ...
	I1119 03:03:53.592716 1662687 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 03:03:53.607009 1662687 system_pods.go:86] 8 kube-system pods found
	I1119 03:03:53.607092 1662687 system_pods.go:89] "coredns-66bc5c9577-5gb8d" [f2cf06c3-a27f-4205-bf83-035adba73690] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 03:03:53.607113 1662687 system_pods.go:89] "etcd-no-preload-800908" [4b2e2353-9488-40c1-a11f-79c5089e6fe1] Running
	I1119 03:03:53.607154 1662687 system_pods.go:89] "kindnet-hcdj9" [dc9e982d-8e14-47c6-a9a3-a4502602caa4] Running
	I1119 03:03:53.607192 1662687 system_pods.go:89] "kube-apiserver-no-preload-800908" [3378061b-4194-4784-b307-f948fa017d4a] Running
	I1119 03:03:53.607211 1662687 system_pods.go:89] "kube-controller-manager-no-preload-800908" [cb7bca27-b010-4e89-adb5-9303f09112c5] Running
	I1119 03:03:53.607243 1662687 system_pods.go:89] "kube-proxy-59bnq" [6b6ee3ab-c31d-447c-895b-d341732cb482] Running
	I1119 03:03:53.607264 1662687 system_pods.go:89] "kube-scheduler-no-preload-800908" [214dd1d7-19ed-477b-8170-e9ddfdc6a14b] Running
	I1119 03:03:53.607284 1662687 system_pods.go:89] "storage-provisioner" [41c9b9d6-c070-4f5d-92ec-e0f2baf1609d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 03:03:53.607335 1662687 retry.go:31] will retry after 244.100565ms: missing components: kube-dns
	I1119 03:03:53.868052 1662687 system_pods.go:86] 8 kube-system pods found
	I1119 03:03:53.868084 1662687 system_pods.go:89] "coredns-66bc5c9577-5gb8d" [f2cf06c3-a27f-4205-bf83-035adba73690] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 03:03:53.868091 1662687 system_pods.go:89] "etcd-no-preload-800908" [4b2e2353-9488-40c1-a11f-79c5089e6fe1] Running
	I1119 03:03:53.868097 1662687 system_pods.go:89] "kindnet-hcdj9" [dc9e982d-8e14-47c6-a9a3-a4502602caa4] Running
	I1119 03:03:53.868102 1662687 system_pods.go:89] "kube-apiserver-no-preload-800908" [3378061b-4194-4784-b307-f948fa017d4a] Running
	I1119 03:03:53.868106 1662687 system_pods.go:89] "kube-controller-manager-no-preload-800908" [cb7bca27-b010-4e89-adb5-9303f09112c5] Running
	I1119 03:03:53.868110 1662687 system_pods.go:89] "kube-proxy-59bnq" [6b6ee3ab-c31d-447c-895b-d341732cb482] Running
	I1119 03:03:53.868114 1662687 system_pods.go:89] "kube-scheduler-no-preload-800908" [214dd1d7-19ed-477b-8170-e9ddfdc6a14b] Running
	I1119 03:03:53.868119 1662687 system_pods.go:89] "storage-provisioner" [41c9b9d6-c070-4f5d-92ec-e0f2baf1609d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 03:03:53.868136 1662687 retry.go:31] will retry after 284.240962ms: missing components: kube-dns
	I1119 03:03:54.156303 1662687 system_pods.go:86] 8 kube-system pods found
	I1119 03:03:54.156335 1662687 system_pods.go:89] "coredns-66bc5c9577-5gb8d" [f2cf06c3-a27f-4205-bf83-035adba73690] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 03:03:54.156341 1662687 system_pods.go:89] "etcd-no-preload-800908" [4b2e2353-9488-40c1-a11f-79c5089e6fe1] Running
	I1119 03:03:54.156349 1662687 system_pods.go:89] "kindnet-hcdj9" [dc9e982d-8e14-47c6-a9a3-a4502602caa4] Running
	I1119 03:03:54.156353 1662687 system_pods.go:89] "kube-apiserver-no-preload-800908" [3378061b-4194-4784-b307-f948fa017d4a] Running
	I1119 03:03:54.156358 1662687 system_pods.go:89] "kube-controller-manager-no-preload-800908" [cb7bca27-b010-4e89-adb5-9303f09112c5] Running
	I1119 03:03:54.156363 1662687 system_pods.go:89] "kube-proxy-59bnq" [6b6ee3ab-c31d-447c-895b-d341732cb482] Running
	I1119 03:03:54.156367 1662687 system_pods.go:89] "kube-scheduler-no-preload-800908" [214dd1d7-19ed-477b-8170-e9ddfdc6a14b] Running
	I1119 03:03:54.156373 1662687 system_pods.go:89] "storage-provisioner" [41c9b9d6-c070-4f5d-92ec-e0f2baf1609d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 03:03:54.156387 1662687 retry.go:31] will retry after 477.419711ms: missing components: kube-dns
	I1119 03:03:54.637298 1662687 system_pods.go:86] 8 kube-system pods found
	I1119 03:03:54.637327 1662687 system_pods.go:89] "coredns-66bc5c9577-5gb8d" [f2cf06c3-a27f-4205-bf83-035adba73690] Running
	I1119 03:03:54.637333 1662687 system_pods.go:89] "etcd-no-preload-800908" [4b2e2353-9488-40c1-a11f-79c5089e6fe1] Running
	I1119 03:03:54.637337 1662687 system_pods.go:89] "kindnet-hcdj9" [dc9e982d-8e14-47c6-a9a3-a4502602caa4] Running
	I1119 03:03:54.637341 1662687 system_pods.go:89] "kube-apiserver-no-preload-800908" [3378061b-4194-4784-b307-f948fa017d4a] Running
	I1119 03:03:54.637346 1662687 system_pods.go:89] "kube-controller-manager-no-preload-800908" [cb7bca27-b010-4e89-adb5-9303f09112c5] Running
	I1119 03:03:54.637350 1662687 system_pods.go:89] "kube-proxy-59bnq" [6b6ee3ab-c31d-447c-895b-d341732cb482] Running
	I1119 03:03:54.637354 1662687 system_pods.go:89] "kube-scheduler-no-preload-800908" [214dd1d7-19ed-477b-8170-e9ddfdc6a14b] Running
	I1119 03:03:54.637357 1662687 system_pods.go:89] "storage-provisioner" [41c9b9d6-c070-4f5d-92ec-e0f2baf1609d] Running
	I1119 03:03:54.637365 1662687 system_pods.go:126] duration metric: took 1.044642345s to wait for k8s-apps to be running ...
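The retry loop above is minikube's own polling for kube-dns; the same condition can be expressed directly with kubectl, for example:
	kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m
	kubectl -n kube-system get pods   # all eight pods report Running once the loop succeeds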
	I1119 03:03:54.637372 1662687 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 03:03:54.637426 1662687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 03:03:54.656787 1662687 system_svc.go:56] duration metric: took 19.404605ms WaitForService to wait for kubelet
	I1119 03:03:54.656811 1662687 kubeadm.go:587] duration metric: took 18.070315566s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 03:03:54.656830 1662687 node_conditions.go:102] verifying NodePressure condition ...
	I1119 03:03:54.665899 1662687 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1119 03:03:54.665938 1662687 node_conditions.go:123] node cpu capacity is 2
	I1119 03:03:54.665951 1662687 node_conditions.go:105] duration metric: took 9.115833ms to run NodePressure ...
	I1119 03:03:54.665963 1662687 start.go:242] waiting for startup goroutines ...
	I1119 03:03:54.665970 1662687 start.go:247] waiting for cluster config update ...
	I1119 03:03:54.665981 1662687 start.go:256] writing updated cluster config ...
	I1119 03:03:54.666261 1662687 ssh_runner.go:195] Run: rm -f paused
	I1119 03:03:54.673938 1662687 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 03:03:54.677320 1662687 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-5gb8d" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:03:54.683543 1662687 pod_ready.go:94] pod "coredns-66bc5c9577-5gb8d" is "Ready"
	I1119 03:03:54.683608 1662687 pod_ready.go:86] duration metric: took 6.268752ms for pod "coredns-66bc5c9577-5gb8d" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:03:54.685784 1662687 pod_ready.go:83] waiting for pod "etcd-no-preload-800908" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:03:54.689860 1662687 pod_ready.go:94] pod "etcd-no-preload-800908" is "Ready"
	I1119 03:03:54.689879 1662687 pod_ready.go:86] duration metric: took 4.031581ms for pod "etcd-no-preload-800908" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:03:54.692734 1662687 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-800908" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:03:54.697827 1662687 pod_ready.go:94] pod "kube-apiserver-no-preload-800908" is "Ready"
	I1119 03:03:54.697846 1662687 pod_ready.go:86] duration metric: took 5.096781ms for pod "kube-apiserver-no-preload-800908" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:03:54.702276 1662687 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-800908" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:03:55.077996 1662687 pod_ready.go:94] pod "kube-controller-manager-no-preload-800908" is "Ready"
	I1119 03:03:55.078071 1662687 pod_ready.go:86] duration metric: took 375.775165ms for pod "kube-controller-manager-no-preload-800908" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:03:55.279496 1662687 pod_ready.go:83] waiting for pod "kube-proxy-59bnq" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:03:53.719259 1670171 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1119 03:03:53.719292 1670171 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1119 03:03:53.719354 1670171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-886248
	I1119 03:03:53.722356 1670171 addons.go:239] Setting addon default-storageclass=true in "newest-cni-886248"
	W1119 03:03:53.722383 1670171 addons.go:248] addon default-storageclass should already be in state true
	I1119 03:03:53.722410 1670171 host.go:66] Checking if "newest-cni-886248" exists ...
	I1119 03:03:53.722888 1670171 cli_runner.go:164] Run: docker container inspect newest-cni-886248 --format={{.State.Status}}
	I1119 03:03:53.746132 1670171 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34935 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/newest-cni-886248/id_rsa Username:docker}
	I1119 03:03:53.774883 1670171 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34935 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/newest-cni-886248/id_rsa Username:docker}
	I1119 03:03:53.781432 1670171 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 03:03:53.781453 1670171 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 03:03:53.781566 1670171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-886248
	I1119 03:03:53.808908 1670171 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34935 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/newest-cni-886248/id_rsa Username:docker}
	I1119 03:03:54.126795 1670171 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 03:03:54.142862 1670171 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 03:03:54.173563 1670171 api_server.go:52] waiting for apiserver process to appear ...
	I1119 03:03:54.173687 1670171 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 03:03:54.197448 1670171 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 03:03:54.220431 1670171 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1119 03:03:54.220509 1670171 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1119 03:03:54.310138 1670171 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1119 03:03:54.310212 1670171 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1119 03:03:54.410928 1670171 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1119 03:03:54.410999 1670171 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1119 03:03:54.466241 1670171 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1119 03:03:54.466309 1670171 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1119 03:03:54.502112 1670171 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1119 03:03:54.502185 1670171 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1119 03:03:54.516936 1670171 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1119 03:03:54.517016 1670171 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1119 03:03:54.532280 1670171 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1119 03:03:54.532353 1670171 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1119 03:03:54.550862 1670171 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1119 03:03:54.550923 1670171 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1119 03:03:54.566140 1670171 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1119 03:03:54.566211 1670171 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1119 03:03:54.580770 1670171 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
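Once the apply above completes (about 5.5s later in this log), the dashboard objects live in the kubernetes-dashboard namespace and can be checked with:
	kubectl -n kubernetes-dashboard get deploy,svc,pods
	minikube -p newest-cni-886248 dashboard --url   # prints the proxied URL without opening a browser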
	I1119 03:03:55.678128 1662687 pod_ready.go:94] pod "kube-proxy-59bnq" is "Ready"
	I1119 03:03:55.678202 1662687 pod_ready.go:86] duration metric: took 398.633383ms for pod "kube-proxy-59bnq" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:03:55.878078 1662687 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-800908" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:03:56.280084 1662687 pod_ready.go:94] pod "kube-scheduler-no-preload-800908" is "Ready"
	I1119 03:03:56.280115 1662687 pod_ready.go:86] duration metric: took 401.964995ms for pod "kube-scheduler-no-preload-800908" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:03:56.280129 1662687 pod_ready.go:40] duration metric: took 1.606162948s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 03:03:56.379901 1662687 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1119 03:03:56.383386 1662687 out.go:179] * Done! kubectl is now configured to use "no-preload-800908" cluster and "default" namespace by default
	I1119 03:04:00.091208 1670171 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.94826581s)
	I1119 03:04:00.091280 1670171 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (5.917554633s)
	I1119 03:04:00.091293 1670171 api_server.go:72] duration metric: took 6.441014887s to wait for apiserver process to appear ...
	I1119 03:04:00.091299 1670171 api_server.go:88] waiting for apiserver healthz status ...
	I1119 03:04:00.091317 1670171 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 03:04:00.091696 1670171 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.894146877s)
	I1119 03:04:00.092051 1670171 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.511211055s)
	I1119 03:04:00.108787 1670171 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-886248 addons enable metrics-server
	
	I1119 03:04:00.117105 1670171 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1119 03:04:00.132270 1670171 api_server.go:141] control plane version: v1.34.1
	I1119 03:04:00.132302 1670171 api_server.go:131] duration metric: took 40.994961ms to wait for apiserver health ...
	I1119 03:04:00.132313 1670171 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 03:04:00.178031 1670171 system_pods.go:59] 8 kube-system pods found
	I1119 03:04:00.178146 1670171 system_pods.go:61] "coredns-66bc5c9577-wh5wb" [92363de0-8e50-45e7-84f7-8d0e20fa6d64] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1119 03:04:00.178192 1670171 system_pods.go:61] "etcd-newest-cni-886248" [5dc760bc-b71b-4b72-b27d-abf96ba66665] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1119 03:04:00.178224 1670171 system_pods.go:61] "kindnet-wbjgj" [baa5b1cf-5f4f-4ca9-959c-af74d9f62f83] Running
	I1119 03:04:00.178251 1670171 system_pods.go:61] "kube-apiserver-newest-cni-886248" [f48c4478-6515-4447-a2d8-bc8683421e68] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1119 03:04:00.178289 1670171 system_pods.go:61] "kube-controller-manager-newest-cni-886248" [78d87a76-a5af-4b59-9688-1f684aa4eb86] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1119 03:04:00.178319 1670171 system_pods.go:61] "kube-proxy-kn684" [f1bf5af1-d3b3-4b9b-9d12-1f94f4b89a2f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1119 03:04:00.178364 1670171 system_pods.go:61] "kube-scheduler-newest-cni-886248" [9d4bee4f-21a5-4c71-9174-885f35f536ac] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1119 03:04:00.178394 1670171 system_pods.go:61] "storage-provisioner" [4b774a63-0385-4354-91d0-0f4824a9a758] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1119 03:04:00.178417 1670171 system_pods.go:74] duration metric: took 46.0965ms to wait for pod list to return data ...
	I1119 03:04:00.178462 1670171 default_sa.go:34] waiting for default service account to be created ...
	I1119 03:04:00.204225 1670171 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1119 03:04:00.207261 1670171 addons.go:515] duration metric: took 6.55555183s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1119 03:04:00.208313 1670171 default_sa.go:45] found service account: "default"
	I1119 03:04:00.208398 1670171 default_sa.go:55] duration metric: took 29.912743ms for default service account to be created ...
	I1119 03:04:00.208454 1670171 kubeadm.go:587] duration metric: took 6.558172341s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1119 03:04:00.208502 1670171 node_conditions.go:102] verifying NodePressure condition ...
	I1119 03:04:00.237215 1670171 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1119 03:04:00.237308 1670171 node_conditions.go:123] node cpu capacity is 2
	I1119 03:04:00.237340 1670171 node_conditions.go:105] duration metric: took 28.811186ms to run NodePressure ...
	I1119 03:04:00.237382 1670171 start.go:242] waiting for startup goroutines ...
	I1119 03:04:00.237407 1670171 start.go:247] waiting for cluster config update ...
	I1119 03:04:00.237434 1670171 start.go:256] writing updated cluster config ...
	I1119 03:04:00.237945 1670171 ssh_runner.go:195] Run: rm -f paused
	I1119 03:04:00.440062 1670171 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1119 03:04:00.443326 1670171 out.go:179] * Done! kubectl is now configured to use "newest-cni-886248" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 19 03:03:59 newest-cni-886248 crio[613]: time="2025-11-19T03:03:59.087271675Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 03:03:59 newest-cni-886248 crio[613]: time="2025-11-19T03:03:59.092978572Z" level=info msg="Running pod sandbox: kube-system/kindnet-wbjgj/POD" id=978306de-5286-4342-b431-e3c227ba835b name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 03:03:59 newest-cni-886248 crio[613]: time="2025-11-19T03:03:59.093043242Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 03:03:59 newest-cni-886248 crio[613]: time="2025-11-19T03:03:59.1195832Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=3892463a-f8c4-4b27-ae30-771eb94f65ab name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 03:03:59 newest-cni-886248 crio[613]: time="2025-11-19T03:03:59.120523513Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=978306de-5286-4342-b431-e3c227ba835b name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 03:03:59 newest-cni-886248 crio[613]: time="2025-11-19T03:03:59.138976588Z" level=info msg="Ran pod sandbox 5bd27b879481c772b1d1f849f72e4f37cbc6faf8b1680002728fc29f231b289b with infra container: kube-system/kube-proxy-kn684/POD" id=3892463a-f8c4-4b27-ae30-771eb94f65ab name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 03:03:59 newest-cni-886248 crio[613]: time="2025-11-19T03:03:59.146241291Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=8c25f2f1-f926-4320-a062-7101b2db5da3 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 03:03:59 newest-cni-886248 crio[613]: time="2025-11-19T03:03:59.147763134Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=0d5a712b-f562-409b-b899-ed3b707d0918 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 03:03:59 newest-cni-886248 crio[613]: time="2025-11-19T03:03:59.150312559Z" level=info msg="Creating container: kube-system/kube-proxy-kn684/kube-proxy" id=2009cf19-3bdf-49ef-a363-7a0930b8af64 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 03:03:59 newest-cni-886248 crio[613]: time="2025-11-19T03:03:59.152252926Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 03:03:59 newest-cni-886248 crio[613]: time="2025-11-19T03:03:59.152286123Z" level=info msg="Ran pod sandbox 825c53ee89e4c089491320ebe97678466d051b9d8c323193e2646be2ac95a30e with infra container: kube-system/kindnet-wbjgj/POD" id=978306de-5286-4342-b431-e3c227ba835b name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 03:03:59 newest-cni-886248 crio[613]: time="2025-11-19T03:03:59.154741478Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=a0ec2727-2ce9-4876-ba78-b983dce1d416 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 03:03:59 newest-cni-886248 crio[613]: time="2025-11-19T03:03:59.157982394Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=c0da5eb2-5637-423b-b894-8c56c2927df0 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 03:03:59 newest-cni-886248 crio[613]: time="2025-11-19T03:03:59.159124409Z" level=info msg="Creating container: kube-system/kindnet-wbjgj/kindnet-cni" id=7ed85e74-d746-497b-9329-a95d63033b6d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 03:03:59 newest-cni-886248 crio[613]: time="2025-11-19T03:03:59.159420515Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 03:03:59 newest-cni-886248 crio[613]: time="2025-11-19T03:03:59.172461791Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 03:03:59 newest-cni-886248 crio[613]: time="2025-11-19T03:03:59.172950129Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 03:03:59 newest-cni-886248 crio[613]: time="2025-11-19T03:03:59.177780268Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 03:03:59 newest-cni-886248 crio[613]: time="2025-11-19T03:03:59.178469963Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 03:03:59 newest-cni-886248 crio[613]: time="2025-11-19T03:03:59.266322772Z" level=info msg="Created container 972e1acc7cbb218ba9da3cd0acc60b2dc76dcbd471980cb3d49154c780725250: kube-system/kube-proxy-kn684/kube-proxy" id=2009cf19-3bdf-49ef-a363-7a0930b8af64 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 03:03:59 newest-cni-886248 crio[613]: time="2025-11-19T03:03:59.277998925Z" level=info msg="Starting container: 972e1acc7cbb218ba9da3cd0acc60b2dc76dcbd471980cb3d49154c780725250" id=da39b955-983c-4256-9812-f6a4c3598068 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 03:03:59 newest-cni-886248 crio[613]: time="2025-11-19T03:03:59.285824654Z" level=info msg="Started container" PID=1070 containerID=972e1acc7cbb218ba9da3cd0acc60b2dc76dcbd471980cb3d49154c780725250 description=kube-system/kube-proxy-kn684/kube-proxy id=da39b955-983c-4256-9812-f6a4c3598068 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5bd27b879481c772b1d1f849f72e4f37cbc6faf8b1680002728fc29f231b289b
	Nov 19 03:03:59 newest-cni-886248 crio[613]: time="2025-11-19T03:03:59.302978353Z" level=info msg="Created container 6fe616ee0273c716f6e7a6fb7b7ac5a8ff750f30a0b8d2ed8d266d7ad6a45adc: kube-system/kindnet-wbjgj/kindnet-cni" id=7ed85e74-d746-497b-9329-a95d63033b6d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 03:03:59 newest-cni-886248 crio[613]: time="2025-11-19T03:03:59.306194048Z" level=info msg="Starting container: 6fe616ee0273c716f6e7a6fb7b7ac5a8ff750f30a0b8d2ed8d266d7ad6a45adc" id=9e69775d-c28b-4550-95ed-a725d6efe016 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 03:03:59 newest-cni-886248 crio[613]: time="2025-11-19T03:03:59.314750569Z" level=info msg="Started container" PID=1067 containerID=6fe616ee0273c716f6e7a6fb7b7ac5a8ff750f30a0b8d2ed8d266d7ad6a45adc description=kube-system/kindnet-wbjgj/kindnet-cni id=9e69775d-c28b-4550-95ed-a725d6efe016 name=/runtime.v1.RuntimeService/StartContainer sandboxID=825c53ee89e4c089491320ebe97678466d051b9d8c323193e2646be2ac95a30e
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	972e1acc7cbb2       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   6 seconds ago       Running             kube-proxy                1                   5bd27b879481c       kube-proxy-kn684                            kube-system
	6fe616ee0273c       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   6 seconds ago       Running             kindnet-cni               1                   825c53ee89e4c       kindnet-wbjgj                               kube-system
	e1eed48672587       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   12 seconds ago      Running             kube-controller-manager   1                   a1f61d1ec2a87       kube-controller-manager-newest-cni-886248   kube-system
	6d128baea56ff       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   12 seconds ago      Running             kube-apiserver            1                   5faeba16edaea       kube-apiserver-newest-cni-886248            kube-system
	ffb0198ce7f01       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   12 seconds ago      Running             etcd                      1                   6cdbcd70f4e9c       etcd-newest-cni-886248                      kube-system
	da8265e8e46cd       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   12 seconds ago      Running             kube-scheduler            1                   e9a85763e408b       kube-scheduler-newest-cni-886248            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-886248
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-886248
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ebd49efe8b7d44f89fc96e21beffcd64dfee8277
	                    minikube.k8s.io/name=newest-cni-886248
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T03_03_34_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 03:03:30 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-886248
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 03:03:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 03:03:58 +0000   Wed, 19 Nov 2025 03:03:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 03:03:58 +0000   Wed, 19 Nov 2025 03:03:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 03:03:58 +0000   Wed, 19 Nov 2025 03:03:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 19 Nov 2025 03:03:58 +0000   Wed, 19 Nov 2025 03:03:21 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-886248
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                aa6cbc50-f2b0-4528-80c3-566034a2d86c
	  Boot ID:                    b92b1939-fcd0-45dc-ac89-2d161566a71c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-886248                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         33s
	  kube-system                 kindnet-wbjgj                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-newest-cni-886248             250m (12%)    0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-newest-cni-886248    200m (10%)    0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-proxy-kn684                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-newest-cni-886248             100m (5%)     0 (0%)      0 (0%)           0 (0%)         35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 25s                kube-proxy       
	  Normal   Starting                 6s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  47s (x8 over 47s)  kubelet          Node newest-cni-886248 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    47s (x8 over 47s)  kubelet          Node newest-cni-886248 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     47s (x8 over 47s)  kubelet          Node newest-cni-886248 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    33s                kubelet          Node newest-cni-886248 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 33s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  33s                kubelet          Node newest-cni-886248 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     33s                kubelet          Node newest-cni-886248 status is now: NodeHasSufficientPID
	  Normal   Starting                 33s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           29s                node-controller  Node newest-cni-886248 event: Registered Node newest-cni-886248 in Controller
	  Normal   Starting                 14s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 14s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  14s (x2 over 14s)  kubelet          Node newest-cni-886248 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    14s (x2 over 14s)  kubelet          Node newest-cni-886248 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     14s (x2 over 14s)  kubelet          Node newest-cni-886248 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5s                 node-controller  Node newest-cni-886248 event: Registered Node newest-cni-886248 in Controller
	
	
	==> dmesg <==
	[ +25.528121] overlayfs: idmapped layers are currently not supported
	[ +11.329962] overlayfs: idmapped layers are currently not supported
	[Nov19 02:42] overlayfs: idmapped layers are currently not supported
	[ +16.386117] overlayfs: idmapped layers are currently not supported
	[Nov19 02:43] overlayfs: idmapped layers are currently not supported
	[ +23.762081] overlayfs: idmapped layers are currently not supported
	[Nov19 02:45] overlayfs: idmapped layers are currently not supported
	[Nov19 02:46] overlayfs: idmapped layers are currently not supported
	[Nov19 02:48] overlayfs: idmapped layers are currently not supported
	[Nov19 02:50] overlayfs: idmapped layers are currently not supported
	[ +30.622614] overlayfs: idmapped layers are currently not supported
	[Nov19 02:53] overlayfs: idmapped layers are currently not supported
	[Nov19 02:55] overlayfs: idmapped layers are currently not supported
	[ +48.629499] overlayfs: idmapped layers are currently not supported
	[Nov19 02:56] overlayfs: idmapped layers are currently not supported
	[ +31.470515] overlayfs: idmapped layers are currently not supported
	[Nov19 02:57] overlayfs: idmapped layers are currently not supported
	[Nov19 02:58] overlayfs: idmapped layers are currently not supported
	[Nov19 03:00] overlayfs: idmapped layers are currently not supported
	[  +8.385032] overlayfs: idmapped layers are currently not supported
	[Nov19 03:01] overlayfs: idmapped layers are currently not supported
	[  +9.842210] overlayfs: idmapped layers are currently not supported
	[Nov19 03:02] overlayfs: idmapped layers are currently not supported
	[Nov19 03:03] overlayfs: idmapped layers are currently not supported
	[ +33.377847] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [ffb0198ce7f012092c5e61eeb22ee641ada1a435b1cd87da1b7ad5f0d00519fc] <==
	{"level":"warn","ts":"2025-11-19T03:03:56.602107Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:03:56.666960Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:03:56.740376Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:03:56.753118Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:03:56.773577Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:03:56.805630Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:03:56.821592Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:03:56.849379Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:03:56.878463Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:03:56.932073Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:03:56.953691Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:03:56.990573Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:03:57.031733Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:03:57.055545Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:03:57.097976Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:03:57.143156Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:03:57.217527Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:03:57.235523Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:03:57.281879Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:03:57.333476Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:03:57.383950Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:03:57.419382Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:03:57.434528Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:03:57.465953Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:03:57.535722Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44876","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 03:04:06 up 10:46,  0 user,  load average: 6.02, 4.18, 3.08
	Linux newest-cni-886248 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6fe616ee0273c716f6e7a6fb7b7ac5a8ff750f30a0b8d2ed8d266d7ad6a45adc] <==
	I1119 03:03:59.425827       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 03:03:59.431652       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1119 03:03:59.434439       1 main.go:148] setting mtu 1500 for CNI 
	I1119 03:03:59.434465       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 03:03:59.434480       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T03:03:59Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 03:03:59.627786       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 03:03:59.627803       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 03:03:59.627820       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 03:03:59.628728       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [6d128baea56ffd78984f08dc3dc92a053e8b13d6136d8a220e0fb895c448d4be] <==
	I1119 03:03:58.525764       1 autoregister_controller.go:144] Starting autoregister controller
	I1119 03:03:58.525771       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1119 03:03:58.525776       1 cache.go:39] Caches are synced for autoregister controller
	I1119 03:03:58.584835       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1119 03:03:58.590210       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 03:03:58.596822       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1119 03:03:58.596942       1 policy_source.go:240] refreshing policies
	I1119 03:03:58.597014       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1119 03:03:58.597024       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1119 03:03:58.597162       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1119 03:03:58.598057       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1119 03:03:58.598836       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1119 03:03:58.605894       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 03:03:58.800205       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1119 03:03:59.373442       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 03:03:59.723691       1 controller.go:667] quota admission added evaluator for: namespaces
	I1119 03:03:59.778625       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1119 03:03:59.806034       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 03:03:59.817141       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 03:03:59.878429       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.116.57"}
	I1119 03:03:59.907223       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.70.15"}
	I1119 03:04:01.979512       1 controller.go:667] quota admission added evaluator for: endpoints
	I1119 03:04:02.268607       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1119 03:04:02.414344       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1119 03:04:02.465250       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [e1eed48672587b0f0942e9efafdc58e46f8385b96a631acf88ebc24aca51da13] <==
	I1119 03:04:01.907313       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1119 03:04:01.908472       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1119 03:04:01.909592       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1119 03:04:01.909670       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1119 03:04:01.909863       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1119 03:04:01.912305       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1119 03:04:01.913495       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1119 03:04:01.913578       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1119 03:04:01.913618       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1119 03:04:01.913748       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1119 03:04:01.915981       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1119 03:04:01.916409       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1119 03:04:01.926659       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1119 03:04:01.928934       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 03:04:01.932051       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1119 03:04:01.934327       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1119 03:04:01.935767       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1119 03:04:01.938294       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 03:04:01.941432       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1119 03:04:01.942647       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1119 03:04:01.946011       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1119 03:04:01.965789       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 03:04:01.965817       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1119 03:04:01.965823       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1119 03:04:01.966341       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	
	
	==> kube-proxy [972e1acc7cbb218ba9da3cd0acc60b2dc76dcbd471980cb3d49154c780725250] <==
	I1119 03:03:59.617736       1 server_linux.go:53] "Using iptables proxy"
	I1119 03:03:59.887145       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 03:03:59.988269       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 03:03:59.988303       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1119 03:03:59.988394       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 03:04:00.155979       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 03:04:00.156129       1 server_linux.go:132] "Using iptables Proxier"
	I1119 03:04:00.163083       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 03:04:00.163669       1 server.go:527] "Version info" version="v1.34.1"
	I1119 03:04:00.163742       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 03:04:00.173609       1 config.go:200] "Starting service config controller"
	I1119 03:04:00.173760       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 03:04:00.183193       1 config.go:106] "Starting endpoint slice config controller"
	I1119 03:04:00.183236       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 03:04:00.183272       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 03:04:00.183277       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 03:04:00.184049       1 config.go:309] "Starting node config controller"
	I1119 03:04:00.184062       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 03:04:00.184068       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 03:04:00.283023       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1119 03:04:00.284259       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1119 03:04:00.284474       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [da8265e8e46cd2db7db56b1bcfe9737eace63b799347a005a9b97166455a3aff] <==
	I1119 03:03:55.800056       1 serving.go:386] Generated self-signed cert in-memory
	I1119 03:03:59.007536       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1119 03:03:59.007596       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 03:03:59.066231       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1119 03:03:59.066349       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1119 03:03:59.066373       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1119 03:03:59.066420       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1119 03:03:59.077088       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 03:03:59.077132       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 03:03:59.078726       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1119 03:03:59.078749       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1119 03:03:59.166674       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1119 03:03:59.183304       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1119 03:03:59.287815       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 19 03:03:58 newest-cni-886248 kubelet[734]: I1119 03:03:58.549657     734 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-886248"
	Nov 19 03:03:58 newest-cni-886248 kubelet[734]: E1119 03:03:58.710913     734 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-886248\" already exists" pod="kube-system/etcd-newest-cni-886248"
	Nov 19 03:03:58 newest-cni-886248 kubelet[734]: I1119 03:03:58.710952     734 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-886248"
	Nov 19 03:03:58 newest-cni-886248 kubelet[734]: E1119 03:03:58.711111     734 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-886248\" already exists" pod="kube-system/kube-scheduler-newest-cni-886248"
	Nov 19 03:03:58 newest-cni-886248 kubelet[734]: I1119 03:03:58.720270     734 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-886248"
	Nov 19 03:03:58 newest-cni-886248 kubelet[734]: I1119 03:03:58.720369     734 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-886248"
	Nov 19 03:03:58 newest-cni-886248 kubelet[734]: I1119 03:03:58.720398     734 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 19 03:03:58 newest-cni-886248 kubelet[734]: I1119 03:03:58.721419     734 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 19 03:03:58 newest-cni-886248 kubelet[734]: E1119 03:03:58.744481     734 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-886248\" already exists" pod="kube-system/kube-apiserver-newest-cni-886248"
	Nov 19 03:03:58 newest-cni-886248 kubelet[734]: I1119 03:03:58.744508     734 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-886248"
	Nov 19 03:03:58 newest-cni-886248 kubelet[734]: I1119 03:03:58.752708     734 apiserver.go:52] "Watching apiserver"
	Nov 19 03:03:58 newest-cni-886248 kubelet[734]: E1119 03:03:58.767404     734 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-886248\" already exists" pod="kube-system/kube-controller-manager-newest-cni-886248"
	Nov 19 03:03:58 newest-cni-886248 kubelet[734]: I1119 03:03:58.767538     734 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-886248"
	Nov 19 03:03:58 newest-cni-886248 kubelet[734]: I1119 03:03:58.767492     734 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 19 03:03:58 newest-cni-886248 kubelet[734]: E1119 03:03:58.786878     734 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-886248\" already exists" pod="kube-system/kube-scheduler-newest-cni-886248"
	Nov 19 03:03:58 newest-cni-886248 kubelet[734]: I1119 03:03:58.792173     734 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f1bf5af1-d3b3-4b9b-9d12-1f94f4b89a2f-xtables-lock\") pod \"kube-proxy-kn684\" (UID: \"f1bf5af1-d3b3-4b9b-9d12-1f94f4b89a2f\") " pod="kube-system/kube-proxy-kn684"
	Nov 19 03:03:58 newest-cni-886248 kubelet[734]: I1119 03:03:58.792249     734 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/baa5b1cf-5f4f-4ca9-959c-af74d9f62f83-cni-cfg\") pod \"kindnet-wbjgj\" (UID: \"baa5b1cf-5f4f-4ca9-959c-af74d9f62f83\") " pod="kube-system/kindnet-wbjgj"
	Nov 19 03:03:58 newest-cni-886248 kubelet[734]: I1119 03:03:58.792270     734 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/baa5b1cf-5f4f-4ca9-959c-af74d9f62f83-lib-modules\") pod \"kindnet-wbjgj\" (UID: \"baa5b1cf-5f4f-4ca9-959c-af74d9f62f83\") " pod="kube-system/kindnet-wbjgj"
	Nov 19 03:03:58 newest-cni-886248 kubelet[734]: I1119 03:03:58.792312     734 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/baa5b1cf-5f4f-4ca9-959c-af74d9f62f83-xtables-lock\") pod \"kindnet-wbjgj\" (UID: \"baa5b1cf-5f4f-4ca9-959c-af74d9f62f83\") " pod="kube-system/kindnet-wbjgj"
	Nov 19 03:03:58 newest-cni-886248 kubelet[734]: I1119 03:03:58.792335     734 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f1bf5af1-d3b3-4b9b-9d12-1f94f4b89a2f-lib-modules\") pod \"kube-proxy-kn684\" (UID: \"f1bf5af1-d3b3-4b9b-9d12-1f94f4b89a2f\") " pod="kube-system/kube-proxy-kn684"
	Nov 19 03:03:58 newest-cni-886248 kubelet[734]: I1119 03:03:58.822054     734 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 19 03:03:59 newest-cni-886248 kubelet[734]: W1119 03:03:59.146729     734 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/9ceb6de1b4d7ca96fef3ff191371417b9c4e247fd4589ac0d2a1c844b9532578/crio-825c53ee89e4c089491320ebe97678466d051b9d8c323193e2646be2ac95a30e WatchSource:0}: Error finding container 825c53ee89e4c089491320ebe97678466d051b9d8c323193e2646be2ac95a30e: Status 404 returned error can't find the container with id 825c53ee89e4c089491320ebe97678466d051b9d8c323193e2646be2ac95a30e
	Nov 19 03:04:01 newest-cni-886248 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 19 03:04:01 newest-cni-886248 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 19 03:04:01 newest-cni-886248 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-886248 -n newest-cni-886248
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-886248 -n newest-cni-886248: exit status 2 (456.504813ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-886248 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-wh5wb storage-provisioner dashboard-metrics-scraper-6ffb444bf9-jjfww kubernetes-dashboard-855c9754f9-jd7tx
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-886248 describe pod coredns-66bc5c9577-wh5wb storage-provisioner dashboard-metrics-scraper-6ffb444bf9-jjfww kubernetes-dashboard-855c9754f9-jd7tx
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-886248 describe pod coredns-66bc5c9577-wh5wb storage-provisioner dashboard-metrics-scraper-6ffb444bf9-jjfww kubernetes-dashboard-855c9754f9-jd7tx: exit status 1 (109.788828ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-wh5wb" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-jjfww" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-jd7tx" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-886248 describe pod coredns-66bc5c9577-wh5wb storage-provisioner dashboard-metrics-scraper-6ffb444bf9-jjfww kubernetes-dashboard-855c9754f9-jd7tx: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (6.03s)
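Two details in the post-mortem above are worth reading together. First, at collection time the node still carries the node.kubernetes.io/not-ready:NoSchedule taint ("container runtime network not ready ... no CNI configuration file in /etc/cni/net.d/"), which is why coredns-66bc5c9577-wh5wb and storage-provisioner are reported as Pending/Unschedulable. Second, the NotFound errors from the describe step are expected: the preceding get step lists pods across all namespaces (-A), but the describe command is run without -n, so it only looks in the default namespace while the listed pods live in kube-system and kubernetes-dashboard. A minimal re-check sketch, assuming the newest-cni-886248 profile is still up and the pod names reported above have not been recreated since:

	kubectl --context newest-cni-886248 -n kube-system describe pod coredns-66bc5c9577-wh5wb storage-provisioner
	kubectl --context newest-cni-886248 -n kubernetes-dashboard describe pod dashboard-metrics-scraper-6ffb444bf9-jjfww kubernetes-dashboard-855c9754f9-jd7tx
	minikube -p newest-cni-886248 ssh -- ls /etc/cni/net.d/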

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (3.13s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-800908 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-800908 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (352.828203ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T03:04:07Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p no-preload-800908 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-800908 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-800908 describe deploy/metrics-server -n kube-system: exit status 1 (118.964664ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-800908 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
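The exit status 11 above reads as a failure of minikube's paused-state check rather than of the addon apply itself: per the MK_ADDON_ENABLE_PAUSED message, enabling the addon first lists runc containers ("check paused: list paused: runc: sudo runc list -f json"), and that command fails on this crio node with "open /run/runc: no such file or directory". The later "metrics-server not found" error then follows directly, since the addon deployment was never created. A minimal reproduction sketch, assuming the no-preload-800908 profile from this run is still available; this is the same command the error message reports as failing:

	minikube -p no-preload-800908 ssh -- sudo runc list -f json
	# observed on an affected node: "open /run/runc: no such file or directory", exit status 1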
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-800908
helpers_test.go:243: (dbg) docker inspect no-preload-800908:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b531313c62c45257823b101ae62d386ff7365c2550425d41cad2e93644f4aebd",
	        "Created": "2025-11-19T03:02:36.622194348Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1662992,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T03:02:36.686722322Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:6dfeb5329cc83d126555f88f43b09eef7a09c7f546c9166b94d33747df91b6df",
	        "ResolvConfPath": "/var/lib/docker/containers/b531313c62c45257823b101ae62d386ff7365c2550425d41cad2e93644f4aebd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b531313c62c45257823b101ae62d386ff7365c2550425d41cad2e93644f4aebd/hostname",
	        "HostsPath": "/var/lib/docker/containers/b531313c62c45257823b101ae62d386ff7365c2550425d41cad2e93644f4aebd/hosts",
	        "LogPath": "/var/lib/docker/containers/b531313c62c45257823b101ae62d386ff7365c2550425d41cad2e93644f4aebd/b531313c62c45257823b101ae62d386ff7365c2550425d41cad2e93644f4aebd-json.log",
	        "Name": "/no-preload-800908",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-800908:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-800908",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b531313c62c45257823b101ae62d386ff7365c2550425d41cad2e93644f4aebd",
	                "LowerDir": "/var/lib/docker/overlay2/5f2a991abb1ac9e1d4f1b633bb11e2415ce1437a860a51427c5b7ab54fc65618-init/diff:/var/lib/docker/overlay2/c48d08e2bd245db4e1c5c6447aff9f72126e9377265a1f1172daf5070a059e2a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5f2a991abb1ac9e1d4f1b633bb11e2415ce1437a860a51427c5b7ab54fc65618/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5f2a991abb1ac9e1d4f1b633bb11e2415ce1437a860a51427c5b7ab54fc65618/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5f2a991abb1ac9e1d4f1b633bb11e2415ce1437a860a51427c5b7ab54fc65618/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-800908",
	                "Source": "/var/lib/docker/volumes/no-preload-800908/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-800908",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-800908",
	                "name.minikube.sigs.k8s.io": "no-preload-800908",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "015d567ad91f0108949b524b5e1680c5ebc43912ee959493ff9039c008565edd",
	            "SandboxKey": "/var/run/docker/netns/015d567ad91f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34925"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34926"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34929"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34927"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34928"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-800908": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "42:fd:8e:08:ae:73",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "2c1e146c03dfa36d5dc32c1606b9c05b9b637b68e1e65d533d701c41873db1eb",
	                    "EndpointID": "c240eeb88abe04077e80b938afadc834961d70427dc4648c2152c477887d5f54",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-800908",
	                        "b531313c62c4"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
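The host-port mappings in the NetworkSettings block above are what the provisioning code reads back later in these logs. As a minimal illustration (container name taken from this report; the Go template matches the one the provisioner runs further down), the mapped SSH port could be queried with:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-800908

which, per the Ports map above, should print 34925.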
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-800908 -n no-preload-800908
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-800908 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-800908 logs -n 25: (1.603163075s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ addons  │ enable metrics-server -p embed-certs-592123 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-592123           │ jenkins │ v1.37.0 │ 19 Nov 25 03:01 UTC │                     │
	│ stop    │ -p embed-certs-592123 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-592123           │ jenkins │ v1.37.0 │ 19 Nov 25 03:01 UTC │ 19 Nov 25 03:01 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-579203 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-579203 │ jenkins │ v1.37.0 │ 19 Nov 25 03:01 UTC │ 19 Nov 25 03:01 UTC │
	│ start   │ -p default-k8s-diff-port-579203 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-579203 │ jenkins │ v1.37.0 │ 19 Nov 25 03:01 UTC │ 19 Nov 25 03:02 UTC │
	│ addons  │ enable dashboard -p embed-certs-592123 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-592123           │ jenkins │ v1.37.0 │ 19 Nov 25 03:01 UTC │ 19 Nov 25 03:01 UTC │
	│ start   │ -p embed-certs-592123 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-592123           │ jenkins │ v1.37.0 │ 19 Nov 25 03:01 UTC │ 19 Nov 25 03:02 UTC │
	│ image   │ default-k8s-diff-port-579203 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-579203 │ jenkins │ v1.37.0 │ 19 Nov 25 03:02 UTC │ 19 Nov 25 03:02 UTC │
	│ pause   │ -p default-k8s-diff-port-579203 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-579203 │ jenkins │ v1.37.0 │ 19 Nov 25 03:02 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-579203                                                                                                                                                                                                               │ default-k8s-diff-port-579203 │ jenkins │ v1.37.0 │ 19 Nov 25 03:02 UTC │ 19 Nov 25 03:02 UTC │
	│ delete  │ -p default-k8s-diff-port-579203                                                                                                                                                                                                               │ default-k8s-diff-port-579203 │ jenkins │ v1.37.0 │ 19 Nov 25 03:02 UTC │ 19 Nov 25 03:02 UTC │
	│ delete  │ -p disable-driver-mounts-722439                                                                                                                                                                                                               │ disable-driver-mounts-722439 │ jenkins │ v1.37.0 │ 19 Nov 25 03:02 UTC │ 19 Nov 25 03:02 UTC │
	│ start   │ -p no-preload-800908 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-800908            │ jenkins │ v1.37.0 │ 19 Nov 25 03:02 UTC │ 19 Nov 25 03:03 UTC │
	│ image   │ embed-certs-592123 image list --format=json                                                                                                                                                                                                   │ embed-certs-592123           │ jenkins │ v1.37.0 │ 19 Nov 25 03:02 UTC │ 19 Nov 25 03:02 UTC │
	│ pause   │ -p embed-certs-592123 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-592123           │ jenkins │ v1.37.0 │ 19 Nov 25 03:02 UTC │                     │
	│ delete  │ -p embed-certs-592123                                                                                                                                                                                                                         │ embed-certs-592123           │ jenkins │ v1.37.0 │ 19 Nov 25 03:02 UTC │ 19 Nov 25 03:02 UTC │
	│ delete  │ -p embed-certs-592123                                                                                                                                                                                                                         │ embed-certs-592123           │ jenkins │ v1.37.0 │ 19 Nov 25 03:02 UTC │ 19 Nov 25 03:02 UTC │
	│ start   │ -p newest-cni-886248 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-886248            │ jenkins │ v1.37.0 │ 19 Nov 25 03:02 UTC │ 19 Nov 25 03:03 UTC │
	│ addons  │ enable metrics-server -p newest-cni-886248 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-886248            │ jenkins │ v1.37.0 │ 19 Nov 25 03:03 UTC │                     │
	│ stop    │ -p newest-cni-886248 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-886248            │ jenkins │ v1.37.0 │ 19 Nov 25 03:03 UTC │ 19 Nov 25 03:03 UTC │
	│ addons  │ enable dashboard -p newest-cni-886248 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-886248            │ jenkins │ v1.37.0 │ 19 Nov 25 03:03 UTC │ 19 Nov 25 03:03 UTC │
	│ start   │ -p newest-cni-886248 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-886248            │ jenkins │ v1.37.0 │ 19 Nov 25 03:03 UTC │ 19 Nov 25 03:04 UTC │
	│ image   │ newest-cni-886248 image list --format=json                                                                                                                                                                                                    │ newest-cni-886248            │ jenkins │ v1.37.0 │ 19 Nov 25 03:04 UTC │ 19 Nov 25 03:04 UTC │
	│ pause   │ -p newest-cni-886248 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-886248            │ jenkins │ v1.37.0 │ 19 Nov 25 03:04 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-800908 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-800908            │ jenkins │ v1.37.0 │ 19 Nov 25 03:04 UTC │                     │
	│ delete  │ -p newest-cni-886248                                                                                                                                                                                                                          │ newest-cni-886248            │ jenkins │ v1.37.0 │ 19 Nov 25 03:04 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
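The Audit table above is minikube's command audit trail, replayed at the top of the logs output. Assuming this minikube build supports the --audit flag, the same trail can be dumped on its own with:

	out/minikube-linux-arm64 -p no-preload-800908 logs --audit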
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 03:03:45
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
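(Reading that format against the first entry below: I1119 03:03:45.591904 1670171 out.go:360] is an Info-level line from Nov 19 (1119), timestamped 03:03:45.591904, written by thread id 1670171 from out.go line 360.)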
	I1119 03:03:45.591904 1670171 out.go:360] Setting OutFile to fd 1 ...
	I1119 03:03:45.592050 1670171 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 03:03:45.592063 1670171 out.go:374] Setting ErrFile to fd 2...
	I1119 03:03:45.592081 1670171 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 03:03:45.592410 1670171 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-1463525/.minikube/bin
	I1119 03:03:45.592842 1670171 out.go:368] Setting JSON to false
	I1119 03:03:45.593878 1670171 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":38753,"bootTime":1763482673,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1119 03:03:45.593951 1670171 start.go:143] virtualization:  
	I1119 03:03:45.597601 1670171 out.go:179] * [newest-cni-886248] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1119 03:03:45.601450 1670171 out.go:179]   - MINIKUBE_LOCATION=21924
	I1119 03:03:45.601549 1670171 notify.go:221] Checking for updates...
	I1119 03:03:45.607475 1670171 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 03:03:45.610383 1670171 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21924-1463525/kubeconfig
	I1119 03:03:45.613272 1670171 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-1463525/.minikube
	I1119 03:03:45.616127 1670171 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1119 03:03:45.618961 1670171 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 03:03:45.622305 1670171 config.go:182] Loaded profile config "newest-cni-886248": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 03:03:45.622921 1670171 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 03:03:45.642673 1670171 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1119 03:03:45.642800 1670171 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 03:03:45.699691 1670171 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-19 03:03:45.690485624 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 03:03:45.699792 1670171 docker.go:319] overlay module found
	I1119 03:03:45.702930 1670171 out.go:179] * Using the docker driver based on existing profile
	I1119 03:03:45.705891 1670171 start.go:309] selected driver: docker
	I1119 03:03:45.705931 1670171 start.go:930] validating driver "docker" against &{Name:newest-cni-886248 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-886248 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 03:03:45.706030 1670171 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 03:03:45.706748 1670171 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 03:03:45.759617 1670171 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-19 03:03:45.750891827 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 03:03:45.759984 1670171 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1119 03:03:45.760016 1670171 cni.go:84] Creating CNI manager for ""
	I1119 03:03:45.760074 1670171 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 03:03:45.760121 1670171 start.go:353] cluster config:
	{Name:newest-cni-886248 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-886248 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 03:03:45.763259 1670171 out.go:179] * Starting "newest-cni-886248" primary control-plane node in "newest-cni-886248" cluster
	I1119 03:03:45.766086 1670171 cache.go:134] Beginning downloading kic base image for docker with crio
	I1119 03:03:45.769085 1670171 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1119 03:03:45.776464 1670171 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 03:03:45.776530 1670171 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1119 03:03:45.776541 1670171 cache.go:65] Caching tarball of preloaded images
	I1119 03:03:45.776564 1670171 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1119 03:03:45.776640 1670171 preload.go:238] Found /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1119 03:03:45.776651 1670171 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 03:03:45.776764 1670171 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/newest-cni-886248/config.json ...
	I1119 03:03:45.796299 1670171 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1119 03:03:45.796323 1670171 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1119 03:03:45.796337 1670171 cache.go:243] Successfully downloaded all kic artifacts
	I1119 03:03:45.796360 1670171 start.go:360] acquireMachinesLock for newest-cni-886248: {Name:mkfb71f15fb61e4b42e0e59e9b569595aaffd1c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 03:03:45.796418 1670171 start.go:364] duration metric: took 36.208µs to acquireMachinesLock for "newest-cni-886248"
	I1119 03:03:45.796442 1670171 start.go:96] Skipping create...Using existing machine configuration
	I1119 03:03:45.796451 1670171 fix.go:54] fixHost starting: 
	I1119 03:03:45.796704 1670171 cli_runner.go:164] Run: docker container inspect newest-cni-886248 --format={{.State.Status}}
	I1119 03:03:45.813325 1670171 fix.go:112] recreateIfNeeded on newest-cni-886248: state=Stopped err=<nil>
	W1119 03:03:45.813357 1670171 fix.go:138] unexpected machine state, will restart: <nil>
	W1119 03:03:47.534190 1662687 node_ready.go:57] node "no-preload-800908" has "Ready":"False" status (will retry)
	W1119 03:03:49.534810 1662687 node_ready.go:57] node "no-preload-800908" has "Ready":"False" status (will retry)
	I1119 03:03:45.816577 1670171 out.go:252] * Restarting existing docker container for "newest-cni-886248" ...
	I1119 03:03:45.816672 1670171 cli_runner.go:164] Run: docker start newest-cni-886248
	I1119 03:03:46.111155 1670171 cli_runner.go:164] Run: docker container inspect newest-cni-886248 --format={{.State.Status}}
	I1119 03:03:46.133780 1670171 kic.go:430] container "newest-cni-886248" state is running.
	I1119 03:03:46.134294 1670171 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-886248
	I1119 03:03:46.165832 1670171 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/newest-cni-886248/config.json ...
	I1119 03:03:46.166055 1670171 machine.go:94] provisionDockerMachine start ...
	I1119 03:03:46.166113 1670171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-886248
	I1119 03:03:46.193071 1670171 main.go:143] libmachine: Using SSH client type: native
	I1119 03:03:46.193388 1670171 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34935 <nil> <nil>}
	I1119 03:03:46.193398 1670171 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 03:03:46.194175 1670171 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1119 03:03:49.341005 1670171 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-886248
	
	I1119 03:03:49.341030 1670171 ubuntu.go:182] provisioning hostname "newest-cni-886248"
	I1119 03:03:49.341151 1670171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-886248
	I1119 03:03:49.359764 1670171 main.go:143] libmachine: Using SSH client type: native
	I1119 03:03:49.360071 1670171 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34935 <nil> <nil>}
	I1119 03:03:49.360088 1670171 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-886248 && echo "newest-cni-886248" | sudo tee /etc/hostname
	I1119 03:03:49.511179 1670171 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-886248
	
	I1119 03:03:49.511253 1670171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-886248
	I1119 03:03:49.530369 1670171 main.go:143] libmachine: Using SSH client type: native
	I1119 03:03:49.530682 1670171 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34935 <nil> <nil>}
	I1119 03:03:49.530705 1670171 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-886248' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-886248/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-886248' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 03:03:49.669701 1670171 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 03:03:49.669771 1670171 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21924-1463525/.minikube CaCertPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21924-1463525/.minikube}
	I1119 03:03:49.669819 1670171 ubuntu.go:190] setting up certificates
	I1119 03:03:49.669858 1670171 provision.go:84] configureAuth start
	I1119 03:03:49.670007 1670171 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-886248
	I1119 03:03:49.687518 1670171 provision.go:143] copyHostCerts
	I1119 03:03:49.687594 1670171 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.pem, removing ...
	I1119 03:03:49.687611 1670171 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.pem
	I1119 03:03:49.687686 1670171 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.pem (1078 bytes)
	I1119 03:03:49.687785 1670171 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-1463525/.minikube/cert.pem, removing ...
	I1119 03:03:49.687790 1670171 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-1463525/.minikube/cert.pem
	I1119 03:03:49.687815 1670171 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21924-1463525/.minikube/cert.pem (1123 bytes)
	I1119 03:03:49.687869 1670171 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-1463525/.minikube/key.pem, removing ...
	I1119 03:03:49.687873 1670171 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-1463525/.minikube/key.pem
	I1119 03:03:49.687907 1670171 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21924-1463525/.minikube/key.pem (1675 bytes)
	I1119 03:03:49.688000 1670171 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca-key.pem org=jenkins.newest-cni-886248 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-886248]
	I1119 03:03:50.073650 1670171 provision.go:177] copyRemoteCerts
	I1119 03:03:50.073720 1670171 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 03:03:50.073771 1670171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-886248
	I1119 03:03:50.092763 1670171 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34935 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/newest-cni-886248/id_rsa Username:docker}
	I1119 03:03:50.198820 1670171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1119 03:03:50.218965 1670171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1119 03:03:50.240614 1670171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1119 03:03:50.257971 1670171 provision.go:87] duration metric: took 588.071039ms to configureAuth
	I1119 03:03:50.257999 1670171 ubuntu.go:206] setting minikube options for container-runtime
	I1119 03:03:50.258207 1670171 config.go:182] Loaded profile config "newest-cni-886248": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 03:03:50.258311 1670171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-886248
	I1119 03:03:50.279488 1670171 main.go:143] libmachine: Using SSH client type: native
	I1119 03:03:50.279799 1670171 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34935 <nil> <nil>}
	I1119 03:03:50.279814 1670171 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 03:03:50.612777 1670171 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 03:03:50.612824 1670171 machine.go:97] duration metric: took 4.446759249s to provisionDockerMachine
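For reference, the drop-in written above lands at /etc/sysconfig/crio.minikube inside the node; assuming the profile is still up, it could be read back from the host with something like:

	out/minikube-linux-arm64 -p newest-cni-886248 ssh -- cat /etc/sysconfig/crio.minikube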
	I1119 03:03:50.612836 1670171 start.go:293] postStartSetup for "newest-cni-886248" (driver="docker")
	I1119 03:03:50.612847 1670171 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 03:03:50.612915 1670171 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 03:03:50.612971 1670171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-886248
	I1119 03:03:50.630696 1670171 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34935 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/newest-cni-886248/id_rsa Username:docker}
	I1119 03:03:50.729202 1670171 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 03:03:50.732546 1670171 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 03:03:50.732574 1670171 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 03:03:50.732585 1670171 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-1463525/.minikube/addons for local assets ...
	I1119 03:03:50.732638 1670171 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-1463525/.minikube/files for local assets ...
	I1119 03:03:50.732719 1670171 filesync.go:149] local asset: /home/jenkins/minikube-integration/21924-1463525/.minikube/files/etc/ssl/certs/14653772.pem -> 14653772.pem in /etc/ssl/certs
	I1119 03:03:50.732818 1670171 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 03:03:50.740334 1670171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/files/etc/ssl/certs/14653772.pem --> /etc/ssl/certs/14653772.pem (1708 bytes)
	I1119 03:03:50.758017 1670171 start.go:296] duration metric: took 145.165054ms for postStartSetup
	I1119 03:03:50.758134 1670171 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 03:03:50.758204 1670171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-886248
	I1119 03:03:50.775332 1670171 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34935 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/newest-cni-886248/id_rsa Username:docker}
	I1119 03:03:50.874646 1670171 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 03:03:50.879898 1670171 fix.go:56] duration metric: took 5.083439603s for fixHost
	I1119 03:03:50.879932 1670171 start.go:83] releasing machines lock for "newest-cni-886248", held for 5.083490334s
	I1119 03:03:50.880002 1670171 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-886248
	I1119 03:03:50.898647 1670171 ssh_runner.go:195] Run: cat /version.json
	I1119 03:03:50.898693 1670171 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 03:03:50.898701 1670171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-886248
	I1119 03:03:50.898767 1670171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-886248
	I1119 03:03:50.918189 1670171 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34935 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/newest-cni-886248/id_rsa Username:docker}
	I1119 03:03:50.931779 1670171 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34935 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/newest-cni-886248/id_rsa Username:docker}
	I1119 03:03:51.021667 1670171 ssh_runner.go:195] Run: systemctl --version
	I1119 03:03:51.117597 1670171 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 03:03:51.158694 1670171 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 03:03:51.163262 1670171 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 03:03:51.163354 1670171 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 03:03:51.171346 1670171 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1119 03:03:51.171425 1670171 start.go:496] detecting cgroup driver to use...
	I1119 03:03:51.171471 1670171 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1119 03:03:51.171553 1670171 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 03:03:51.187671 1670171 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 03:03:51.200786 1670171 docker.go:218] disabling cri-docker service (if available) ...
	I1119 03:03:51.200896 1670171 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 03:03:51.216379 1670171 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 03:03:51.229862 1670171 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 03:03:51.347667 1670171 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 03:03:51.471731 1670171 docker.go:234] disabling docker service ...
	I1119 03:03:51.471852 1670171 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 03:03:51.488567 1670171 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 03:03:51.501380 1670171 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 03:03:51.616293 1670171 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 03:03:51.750642 1670171 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 03:03:51.764770 1670171 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 03:03:51.780195 1670171 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 03:03:51.780293 1670171 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 03:03:51.789091 1670171 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1119 03:03:51.789187 1670171 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 03:03:51.800544 1670171 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 03:03:51.809156 1670171 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 03:03:51.817934 1670171 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 03:03:51.826258 1670171 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 03:03:51.835386 1670171 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 03:03:51.843742 1670171 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 03:03:51.857715 1670171 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 03:03:51.865787 1670171 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 03:03:51.872921 1670171 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 03:03:51.991413 1670171 ssh_runner.go:195] Run: sudo systemctl restart crio
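Taken together, the sed edits above amount to a CRI-O drop-in along these lines (an illustrative reconstruction of the touched keys in /etc/crio/crio.conf.d/02-crio.conf, not a capture from the node):

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]

The daemon-reload and crio restart above apply these settings before kubeadm runs.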
	I1119 03:03:52.174666 1670171 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 03:03:52.174809 1670171 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 03:03:52.178820 1670171 start.go:564] Will wait 60s for crictl version
	I1119 03:03:52.178905 1670171 ssh_runner.go:195] Run: which crictl
	I1119 03:03:52.182623 1670171 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 03:03:52.212784 1670171 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1119 03:03:52.212891 1670171 ssh_runner.go:195] Run: crio --version
	I1119 03:03:52.240641 1670171 ssh_runner.go:195] Run: crio --version
	I1119 03:03:52.274720 1670171 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1119 03:03:52.277698 1670171 cli_runner.go:164] Run: docker network inspect newest-cni-886248 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 03:03:52.294004 1670171 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1119 03:03:52.297976 1670171 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 03:03:52.310508 1670171 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1119 03:03:52.313227 1670171 kubeadm.go:884] updating cluster {Name:newest-cni-886248 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-886248 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:
262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 03:03:52.313380 1670171 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 03:03:52.313477 1670171 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 03:03:52.352180 1670171 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 03:03:52.352206 1670171 crio.go:433] Images already preloaded, skipping extraction
	I1119 03:03:52.352263 1670171 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 03:03:52.377469 1670171 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 03:03:52.377494 1670171 cache_images.go:86] Images are preloaded, skipping loading
	I1119 03:03:52.377502 1670171 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1119 03:03:52.377645 1670171 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-886248 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-886248 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 03:03:52.377732 1670171 ssh_runner.go:195] Run: crio config
	I1119 03:03:52.439427 1670171 cni.go:84] Creating CNI manager for ""
	I1119 03:03:52.439453 1670171 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 03:03:52.439477 1670171 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1119 03:03:52.439506 1670171 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-886248 NodeName:newest-cni-886248 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 03:03:52.439642 1670171 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-886248"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
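	The multi-document config above bundles kubeadm's InitConfiguration and ClusterConfiguration with a KubeletConfiguration and a KubeProxyConfiguration in one file. As a minimal sketch, such a file can be checked for schema errors before it is applied; this assumes kubeadm sits next to kubelet under /var/lib/minikube/binaries/v1.34.1 (the log only confirms that path for kubelet) and uses the kubeadm.yaml.new path minikube writes a few lines below:
	
	# newer kubeadm releases ship a validator for multi-document config files
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new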
	
	I1119 03:03:52.439723 1670171 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 03:03:52.447553 1670171 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 03:03:52.447643 1670171 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 03:03:52.454917 1670171 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1119 03:03:52.467306 1670171 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 03:03:52.480129 1670171 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1119 03:03:52.493028 1670171 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1119 03:03:52.496761 1670171 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 03:03:52.506994 1670171 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 03:03:52.620332 1670171 ssh_runner.go:195] Run: sudo systemctl start kubelet
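	Taken together, the scp and systemctl calls above amount to: write the kubelet systemd drop-in, the kubelet unit and the new kubeadm config onto the node, reload systemd so it sees the changed units, then start kubelet. A minimal sketch of the same sequence as it would look on the node itself (file contents elided; paths are the ones in the log):
	
	sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	# write 10-kubeadm.conf, kubelet.service and kubeadm.yaml.new here
	sudo systemctl daemon-reload   # pick up the new unit files
	sudo systemctl start kubelet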
	I1119 03:03:52.636936 1670171 certs.go:69] Setting up /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/newest-cni-886248 for IP: 192.168.76.2
	I1119 03:03:52.636959 1670171 certs.go:195] generating shared ca certs ...
	I1119 03:03:52.636981 1670171 certs.go:227] acquiring lock for ca certs: {Name:mk25124d30540ffea11da5f0aa38fb6f55186602 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 03:03:52.637113 1670171 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.key
	I1119 03:03:52.637157 1670171 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/proxy-client-ca.key
	I1119 03:03:52.637169 1670171 certs.go:257] generating profile certs ...
	I1119 03:03:52.637256 1670171 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/newest-cni-886248/client.key
	I1119 03:03:52.637329 1670171 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/newest-cni-886248/apiserver.key.774757e0
	I1119 03:03:52.637375 1670171 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/newest-cni-886248/proxy-client.key
	I1119 03:03:52.637497 1670171 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/1465377.pem (1338 bytes)
	W1119 03:03:52.637676 1670171 certs.go:480] ignoring /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/1465377_empty.pem, impossibly tiny 0 bytes
	I1119 03:03:52.637693 1670171 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca-key.pem (1679 bytes)
	I1119 03:03:52.637721 1670171 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem (1078 bytes)
	I1119 03:03:52.637744 1670171 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/cert.pem (1123 bytes)
	I1119 03:03:52.637782 1670171 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/key.pem (1675 bytes)
	I1119 03:03:52.637834 1670171 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/files/etc/ssl/certs/14653772.pem (1708 bytes)
	I1119 03:03:52.638422 1670171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 03:03:52.660084 1670171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1119 03:03:52.678457 1670171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 03:03:52.695538 1670171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1119 03:03:52.712623 1670171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/newest-cni-886248/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1119 03:03:52.752891 1670171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/newest-cni-886248/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1119 03:03:52.781289 1670171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/newest-cni-886248/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 03:03:52.800723 1670171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/newest-cni-886248/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1119 03:03:52.818904 1670171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/1465377.pem --> /usr/share/ca-certificates/1465377.pem (1338 bytes)
	I1119 03:03:52.838576 1670171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/files/etc/ssl/certs/14653772.pem --> /usr/share/ca-certificates/14653772.pem (1708 bytes)
	I1119 03:03:52.859843 1670171 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 03:03:52.882715 1670171 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 03:03:52.896517 1670171 ssh_runner.go:195] Run: openssl version
	I1119 03:03:52.903148 1670171 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14653772.pem && ln -fs /usr/share/ca-certificates/14653772.pem /etc/ssl/certs/14653772.pem"
	I1119 03:03:52.911737 1670171 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14653772.pem
	I1119 03:03:52.915641 1670171 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 02:04 /usr/share/ca-certificates/14653772.pem
	I1119 03:03:52.915745 1670171 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14653772.pem
	I1119 03:03:52.962206 1670171 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14653772.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 03:03:52.970041 1670171 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 03:03:52.978260 1670171 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 03:03:52.982117 1670171 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 01:58 /usr/share/ca-certificates/minikubeCA.pem
	I1119 03:03:52.982258 1670171 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 03:03:53.023645 1670171 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 03:03:53.032824 1670171 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1465377.pem && ln -fs /usr/share/ca-certificates/1465377.pem /etc/ssl/certs/1465377.pem"
	I1119 03:03:53.041380 1670171 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1465377.pem
	I1119 03:03:53.045061 1670171 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 02:04 /usr/share/ca-certificates/1465377.pem
	I1119 03:03:53.045134 1670171 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1465377.pem
	I1119 03:03:53.087163 1670171 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1465377.pem /etc/ssl/certs/51391683.0"
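	The hash-and-link pattern above is OpenSSL's CA directory convention: consumers of /etc/ssl/certs look certificates up by a file named after the subject hash (with a .0 suffix for the first certificate carrying that hash), so each PEM gets a symlink under that name. A minimal sketch of the same idea for a single certificate, using one of the paths from the log:
	
	cert=/usr/share/ca-certificates/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in "$cert")    # prints the subject hash, e.g. b5213941
	sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"   # .0 = first certificate with this hash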
	I1119 03:03:53.095054 1670171 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 03:03:53.098639 1670171 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1119 03:03:53.140386 1670171 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1119 03:03:53.190466 1670171 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1119 03:03:53.233941 1670171 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1119 03:03:53.274859 1670171 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1119 03:03:53.318412 1670171 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
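	Each openssl -checkend 86400 run above asks one question: will this certificate still be valid 86400 seconds (24 hours) from now? The command exits 0 if the certificate does not expire inside that window and non-zero if it does (or already has), presumably so stale certificates can be regenerated before the control plane is restarted. A small sketch of consuming that exit code, with one of the paths from the log:
	
	if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
	    echo "certificate is good for at least another 24h"
	else
	    echo "certificate expires within 24h (or is already expired)"
	fi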
	I1119 03:03:53.389564 1670171 kubeadm.go:401] StartCluster: {Name:newest-cni-886248 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-886248 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 03:03:53.389709 1670171 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 03:03:53.389803 1670171 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 03:03:53.518324 1670171 cri.go:89] found id: "e1eed48672587b0f0942e9efafdc58e46f8385b96a631acf88ebc24aca51da13"
	I1119 03:03:53.518395 1670171 cri.go:89] found id: "6d128baea56ffd78984f08dc3dc92a053e8b13d6136d8a220e0fb895c448d4be"
	I1119 03:03:53.518413 1670171 cri.go:89] found id: "ffb0198ce7f012092c5e61eeb22ee641ada1a435b1cd87da1b7ad5f0d00519fc"
	I1119 03:03:53.518429 1670171 cri.go:89] found id: "da8265e8e46cd2db7db56b1bcfe9737eace63b799347a005a9b97166455a3aff"
	I1119 03:03:53.518446 1670171 cri.go:89] found id: ""
	I1119 03:03:53.518526 1670171 ssh_runner.go:195] Run: sudo runc list -f json
	W1119 03:03:53.565185 1670171 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T03:03:53Z" level=error msg="open /run/runc: no such file or directory"
	I1119 03:03:53.565278 1670171 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 03:03:53.595288 1670171 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1119 03:03:53.595309 1670171 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1119 03:03:53.595359 1670171 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1119 03:03:53.618818 1670171 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1119 03:03:53.619483 1670171 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-886248" does not appear in /home/jenkins/minikube-integration/21924-1463525/kubeconfig
	I1119 03:03:53.619808 1670171 kubeconfig.go:62] /home/jenkins/minikube-integration/21924-1463525/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-886248" cluster setting kubeconfig missing "newest-cni-886248" context setting]
	I1119 03:03:53.620351 1670171 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/kubeconfig: {Name:mk569dbb5fa76b01404eb61ebb04e9884665ea78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 03:03:53.622714 1670171 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1119 03:03:53.641984 1670171 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1119 03:03:53.642062 1670171 kubeadm.go:602] duration metric: took 46.746271ms to restartPrimaryControlPlane
	I1119 03:03:53.642086 1670171 kubeadm.go:403] duration metric: took 252.531076ms to StartCluster
	I1119 03:03:53.642129 1670171 settings.go:142] acquiring lock: {Name:mk60840cd190596747bdc8f4ec6ab9c30048bf11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 03:03:53.642225 1670171 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21924-1463525/kubeconfig
	I1119 03:03:53.649998 1670171 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/kubeconfig: {Name:mk569dbb5fa76b01404eb61ebb04e9884665ea78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 03:03:53.650250 1670171 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 03:03:53.651635 1670171 config.go:182] Loaded profile config "newest-cni-886248": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 03:03:53.651683 1670171 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 03:03:53.651746 1670171 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-886248"
	I1119 03:03:53.651761 1670171 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-886248"
	W1119 03:03:53.651767 1670171 addons.go:248] addon storage-provisioner should already be in state true
	I1119 03:03:53.651788 1670171 host.go:66] Checking if "newest-cni-886248" exists ...
	I1119 03:03:53.652206 1670171 cli_runner.go:164] Run: docker container inspect newest-cni-886248 --format={{.State.Status}}
	I1119 03:03:53.652503 1670171 addons.go:70] Setting dashboard=true in profile "newest-cni-886248"
	I1119 03:03:53.652539 1670171 addons.go:239] Setting addon dashboard=true in "newest-cni-886248"
	W1119 03:03:53.652724 1670171 addons.go:248] addon dashboard should already be in state true
	I1119 03:03:53.652762 1670171 host.go:66] Checking if "newest-cni-886248" exists ...
	I1119 03:03:53.652676 1670171 addons.go:70] Setting default-storageclass=true in profile "newest-cni-886248"
	I1119 03:03:53.652946 1670171 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-886248"
	I1119 03:03:53.653236 1670171 cli_runner.go:164] Run: docker container inspect newest-cni-886248 --format={{.State.Status}}
	I1119 03:03:53.655486 1670171 cli_runner.go:164] Run: docker container inspect newest-cni-886248 --format={{.State.Status}}
	I1119 03:03:53.657151 1670171 out.go:179] * Verifying Kubernetes components...
	I1119 03:03:53.660375 1670171 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 03:03:53.708744 1670171 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 03:03:53.708811 1670171 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1119 03:03:53.712928 1670171 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 03:03:53.712951 1670171 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 03:03:53.713018 1670171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-886248
	I1119 03:03:53.716417 1670171 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1119 03:03:52.033911 1662687 node_ready.go:57] node "no-preload-800908" has "Ready":"False" status (will retry)
	I1119 03:03:53.536114 1662687 node_ready.go:49] node "no-preload-800908" is "Ready"
	I1119 03:03:53.536140 1662687 node_ready.go:38] duration metric: took 14.505081158s for node "no-preload-800908" to be "Ready" ...
	I1119 03:03:53.536155 1662687 api_server.go:52] waiting for apiserver process to appear ...
	I1119 03:03:53.536210 1662687 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 03:03:53.568713 1662687 api_server.go:72] duration metric: took 16.982212001s to wait for apiserver process to appear ...
	I1119 03:03:53.568735 1662687 api_server.go:88] waiting for apiserver healthz status ...
	I1119 03:03:53.568758 1662687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1119 03:03:53.578652 1662687 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1119 03:03:53.580029 1662687 api_server.go:141] control plane version: v1.34.1
	I1119 03:03:53.580053 1662687 api_server.go:131] duration metric: took 11.310896ms to wait for apiserver health ...
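	The health probe above is a plain HTTPS GET against the API server's /healthz endpoint; minikube treats the control plane as healthy once it answers 200 with a body of ok. Assuming the default RBAC binding that exposes /healthz to unauthenticated clients is in place, the same check can be reproduced from the host with curl (the address is the one in the log; -k skips verification because the serving certificate is signed by the minikube CA):
	
	curl -k https://192.168.85.2:8443/healthz ; echo
	# expected: ok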
	I1119 03:03:53.580062 1662687 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 03:03:53.584040 1662687 system_pods.go:59] 8 kube-system pods found
	I1119 03:03:53.584071 1662687 system_pods.go:61] "coredns-66bc5c9577-5gb8d" [f2cf06c3-a27f-4205-bf83-035adba73690] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 03:03:53.584078 1662687 system_pods.go:61] "etcd-no-preload-800908" [4b2e2353-9488-40c1-a11f-79c5089e6fe1] Running
	I1119 03:03:53.584085 1662687 system_pods.go:61] "kindnet-hcdj9" [dc9e982d-8e14-47c6-a9a3-a4502602caa4] Running
	I1119 03:03:53.584089 1662687 system_pods.go:61] "kube-apiserver-no-preload-800908" [3378061b-4194-4784-b307-f948fa017d4a] Running
	I1119 03:03:53.584094 1662687 system_pods.go:61] "kube-controller-manager-no-preload-800908" [cb7bca27-b010-4e89-adb5-9303f09112c5] Running
	I1119 03:03:53.584099 1662687 system_pods.go:61] "kube-proxy-59bnq" [6b6ee3ab-c31d-447c-895b-d341732cb482] Running
	I1119 03:03:53.584103 1662687 system_pods.go:61] "kube-scheduler-no-preload-800908" [214dd1d7-19ed-477b-8170-e9ddfdc6a14b] Running
	I1119 03:03:53.584109 1662687 system_pods.go:61] "storage-provisioner" [41c9b9d6-c070-4f5d-92ec-e0f2baf1609d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 03:03:53.584116 1662687 system_pods.go:74] duration metric: took 4.0479ms to wait for pod list to return data ...
	I1119 03:03:53.584125 1662687 default_sa.go:34] waiting for default service account to be created ...
	I1119 03:03:53.592683 1662687 default_sa.go:45] found service account: "default"
	I1119 03:03:53.592707 1662687 default_sa.go:55] duration metric: took 8.57618ms for default service account to be created ...
	I1119 03:03:53.592716 1662687 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 03:03:53.607009 1662687 system_pods.go:86] 8 kube-system pods found
	I1119 03:03:53.607092 1662687 system_pods.go:89] "coredns-66bc5c9577-5gb8d" [f2cf06c3-a27f-4205-bf83-035adba73690] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 03:03:53.607113 1662687 system_pods.go:89] "etcd-no-preload-800908" [4b2e2353-9488-40c1-a11f-79c5089e6fe1] Running
	I1119 03:03:53.607154 1662687 system_pods.go:89] "kindnet-hcdj9" [dc9e982d-8e14-47c6-a9a3-a4502602caa4] Running
	I1119 03:03:53.607192 1662687 system_pods.go:89] "kube-apiserver-no-preload-800908" [3378061b-4194-4784-b307-f948fa017d4a] Running
	I1119 03:03:53.607211 1662687 system_pods.go:89] "kube-controller-manager-no-preload-800908" [cb7bca27-b010-4e89-adb5-9303f09112c5] Running
	I1119 03:03:53.607243 1662687 system_pods.go:89] "kube-proxy-59bnq" [6b6ee3ab-c31d-447c-895b-d341732cb482] Running
	I1119 03:03:53.607264 1662687 system_pods.go:89] "kube-scheduler-no-preload-800908" [214dd1d7-19ed-477b-8170-e9ddfdc6a14b] Running
	I1119 03:03:53.607284 1662687 system_pods.go:89] "storage-provisioner" [41c9b9d6-c070-4f5d-92ec-e0f2baf1609d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 03:03:53.607335 1662687 retry.go:31] will retry after 244.100565ms: missing components: kube-dns
	I1119 03:03:53.868052 1662687 system_pods.go:86] 8 kube-system pods found
	I1119 03:03:53.868084 1662687 system_pods.go:89] "coredns-66bc5c9577-5gb8d" [f2cf06c3-a27f-4205-bf83-035adba73690] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 03:03:53.868091 1662687 system_pods.go:89] "etcd-no-preload-800908" [4b2e2353-9488-40c1-a11f-79c5089e6fe1] Running
	I1119 03:03:53.868097 1662687 system_pods.go:89] "kindnet-hcdj9" [dc9e982d-8e14-47c6-a9a3-a4502602caa4] Running
	I1119 03:03:53.868102 1662687 system_pods.go:89] "kube-apiserver-no-preload-800908" [3378061b-4194-4784-b307-f948fa017d4a] Running
	I1119 03:03:53.868106 1662687 system_pods.go:89] "kube-controller-manager-no-preload-800908" [cb7bca27-b010-4e89-adb5-9303f09112c5] Running
	I1119 03:03:53.868110 1662687 system_pods.go:89] "kube-proxy-59bnq" [6b6ee3ab-c31d-447c-895b-d341732cb482] Running
	I1119 03:03:53.868114 1662687 system_pods.go:89] "kube-scheduler-no-preload-800908" [214dd1d7-19ed-477b-8170-e9ddfdc6a14b] Running
	I1119 03:03:53.868119 1662687 system_pods.go:89] "storage-provisioner" [41c9b9d6-c070-4f5d-92ec-e0f2baf1609d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 03:03:53.868136 1662687 retry.go:31] will retry after 284.240962ms: missing components: kube-dns
	I1119 03:03:54.156303 1662687 system_pods.go:86] 8 kube-system pods found
	I1119 03:03:54.156335 1662687 system_pods.go:89] "coredns-66bc5c9577-5gb8d" [f2cf06c3-a27f-4205-bf83-035adba73690] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 03:03:54.156341 1662687 system_pods.go:89] "etcd-no-preload-800908" [4b2e2353-9488-40c1-a11f-79c5089e6fe1] Running
	I1119 03:03:54.156349 1662687 system_pods.go:89] "kindnet-hcdj9" [dc9e982d-8e14-47c6-a9a3-a4502602caa4] Running
	I1119 03:03:54.156353 1662687 system_pods.go:89] "kube-apiserver-no-preload-800908" [3378061b-4194-4784-b307-f948fa017d4a] Running
	I1119 03:03:54.156358 1662687 system_pods.go:89] "kube-controller-manager-no-preload-800908" [cb7bca27-b010-4e89-adb5-9303f09112c5] Running
	I1119 03:03:54.156363 1662687 system_pods.go:89] "kube-proxy-59bnq" [6b6ee3ab-c31d-447c-895b-d341732cb482] Running
	I1119 03:03:54.156367 1662687 system_pods.go:89] "kube-scheduler-no-preload-800908" [214dd1d7-19ed-477b-8170-e9ddfdc6a14b] Running
	I1119 03:03:54.156373 1662687 system_pods.go:89] "storage-provisioner" [41c9b9d6-c070-4f5d-92ec-e0f2baf1609d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 03:03:54.156387 1662687 retry.go:31] will retry after 477.419711ms: missing components: kube-dns
	I1119 03:03:54.637298 1662687 system_pods.go:86] 8 kube-system pods found
	I1119 03:03:54.637327 1662687 system_pods.go:89] "coredns-66bc5c9577-5gb8d" [f2cf06c3-a27f-4205-bf83-035adba73690] Running
	I1119 03:03:54.637333 1662687 system_pods.go:89] "etcd-no-preload-800908" [4b2e2353-9488-40c1-a11f-79c5089e6fe1] Running
	I1119 03:03:54.637337 1662687 system_pods.go:89] "kindnet-hcdj9" [dc9e982d-8e14-47c6-a9a3-a4502602caa4] Running
	I1119 03:03:54.637341 1662687 system_pods.go:89] "kube-apiserver-no-preload-800908" [3378061b-4194-4784-b307-f948fa017d4a] Running
	I1119 03:03:54.637346 1662687 system_pods.go:89] "kube-controller-manager-no-preload-800908" [cb7bca27-b010-4e89-adb5-9303f09112c5] Running
	I1119 03:03:54.637350 1662687 system_pods.go:89] "kube-proxy-59bnq" [6b6ee3ab-c31d-447c-895b-d341732cb482] Running
	I1119 03:03:54.637354 1662687 system_pods.go:89] "kube-scheduler-no-preload-800908" [214dd1d7-19ed-477b-8170-e9ddfdc6a14b] Running
	I1119 03:03:54.637357 1662687 system_pods.go:89] "storage-provisioner" [41c9b9d6-c070-4f5d-92ec-e0f2baf1609d] Running
	I1119 03:03:54.637365 1662687 system_pods.go:126] duration metric: took 1.044642345s to wait for k8s-apps to be running ...
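	The retries above simply re-list the kube-system pods until nothing on the required-components list is missing; here the gate was kube-dns, served by the coredns pod. Outside of minikube's own polling loop, roughly the same wait can be expressed with kubectl (the label selector and timeout below are illustrative):
	
	kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=120s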
	I1119 03:03:54.637372 1662687 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 03:03:54.637426 1662687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 03:03:54.656787 1662687 system_svc.go:56] duration metric: took 19.404605ms WaitForService to wait for kubelet
	I1119 03:03:54.656811 1662687 kubeadm.go:587] duration metric: took 18.070315566s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 03:03:54.656830 1662687 node_conditions.go:102] verifying NodePressure condition ...
	I1119 03:03:54.665899 1662687 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1119 03:03:54.665938 1662687 node_conditions.go:123] node cpu capacity is 2
	I1119 03:03:54.665951 1662687 node_conditions.go:105] duration metric: took 9.115833ms to run NodePressure ...
	I1119 03:03:54.665963 1662687 start.go:242] waiting for startup goroutines ...
	I1119 03:03:54.665970 1662687 start.go:247] waiting for cluster config update ...
	I1119 03:03:54.665981 1662687 start.go:256] writing updated cluster config ...
	I1119 03:03:54.666261 1662687 ssh_runner.go:195] Run: rm -f paused
	I1119 03:03:54.673938 1662687 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 03:03:54.677320 1662687 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-5gb8d" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:03:54.683543 1662687 pod_ready.go:94] pod "coredns-66bc5c9577-5gb8d" is "Ready"
	I1119 03:03:54.683608 1662687 pod_ready.go:86] duration metric: took 6.268752ms for pod "coredns-66bc5c9577-5gb8d" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:03:54.685784 1662687 pod_ready.go:83] waiting for pod "etcd-no-preload-800908" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:03:54.689860 1662687 pod_ready.go:94] pod "etcd-no-preload-800908" is "Ready"
	I1119 03:03:54.689879 1662687 pod_ready.go:86] duration metric: took 4.031581ms for pod "etcd-no-preload-800908" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:03:54.692734 1662687 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-800908" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:03:54.697827 1662687 pod_ready.go:94] pod "kube-apiserver-no-preload-800908" is "Ready"
	I1119 03:03:54.697846 1662687 pod_ready.go:86] duration metric: took 5.096781ms for pod "kube-apiserver-no-preload-800908" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:03:54.702276 1662687 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-800908" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:03:55.077996 1662687 pod_ready.go:94] pod "kube-controller-manager-no-preload-800908" is "Ready"
	I1119 03:03:55.078071 1662687 pod_ready.go:86] duration metric: took 375.775165ms for pod "kube-controller-manager-no-preload-800908" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:03:55.279496 1662687 pod_ready.go:83] waiting for pod "kube-proxy-59bnq" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:03:53.719259 1670171 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1119 03:03:53.719292 1670171 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1119 03:03:53.719354 1670171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-886248
	I1119 03:03:53.722356 1670171 addons.go:239] Setting addon default-storageclass=true in "newest-cni-886248"
	W1119 03:03:53.722383 1670171 addons.go:248] addon default-storageclass should already be in state true
	I1119 03:03:53.722410 1670171 host.go:66] Checking if "newest-cni-886248" exists ...
	I1119 03:03:53.722888 1670171 cli_runner.go:164] Run: docker container inspect newest-cni-886248 --format={{.State.Status}}
	I1119 03:03:53.746132 1670171 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34935 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/newest-cni-886248/id_rsa Username:docker}
	I1119 03:03:53.774883 1670171 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34935 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/newest-cni-886248/id_rsa Username:docker}
	I1119 03:03:53.781432 1670171 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 03:03:53.781453 1670171 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 03:03:53.781566 1670171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-886248
	I1119 03:03:53.808908 1670171 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34935 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/newest-cni-886248/id_rsa Username:docker}
	I1119 03:03:54.126795 1670171 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 03:03:54.142862 1670171 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 03:03:54.173563 1670171 api_server.go:52] waiting for apiserver process to appear ...
	I1119 03:03:54.173687 1670171 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 03:03:54.197448 1670171 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 03:03:54.220431 1670171 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1119 03:03:54.220509 1670171 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1119 03:03:54.310138 1670171 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1119 03:03:54.310212 1670171 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1119 03:03:54.410928 1670171 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1119 03:03:54.410999 1670171 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1119 03:03:54.466241 1670171 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1119 03:03:54.466309 1670171 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1119 03:03:54.502112 1670171 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1119 03:03:54.502185 1670171 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1119 03:03:54.516936 1670171 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1119 03:03:54.517016 1670171 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1119 03:03:54.532280 1670171 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1119 03:03:54.532353 1670171 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1119 03:03:54.550862 1670171 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1119 03:03:54.550923 1670171 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1119 03:03:54.566140 1670171 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1119 03:03:54.566211 1670171 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1119 03:03:54.580770 1670171 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1119 03:03:55.678128 1662687 pod_ready.go:94] pod "kube-proxy-59bnq" is "Ready"
	I1119 03:03:55.678202 1662687 pod_ready.go:86] duration metric: took 398.633383ms for pod "kube-proxy-59bnq" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:03:55.878078 1662687 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-800908" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:03:56.280084 1662687 pod_ready.go:94] pod "kube-scheduler-no-preload-800908" is "Ready"
	I1119 03:03:56.280115 1662687 pod_ready.go:86] duration metric: took 401.964995ms for pod "kube-scheduler-no-preload-800908" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:03:56.280129 1662687 pod_ready.go:40] duration metric: took 1.606162948s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 03:03:56.379901 1662687 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1119 03:03:56.383386 1662687 out.go:179] * Done! kubectl is now configured to use "no-preload-800908" cluster and "default" namespace by default
	I1119 03:04:00.091208 1670171 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.94826581s)
	I1119 03:04:00.091280 1670171 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (5.917554633s)
	I1119 03:04:00.091293 1670171 api_server.go:72] duration metric: took 6.441014887s to wait for apiserver process to appear ...
	I1119 03:04:00.091299 1670171 api_server.go:88] waiting for apiserver healthz status ...
	I1119 03:04:00.091317 1670171 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 03:04:00.091696 1670171 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.894146877s)
	I1119 03:04:00.092051 1670171 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.511211055s)
	I1119 03:04:00.108787 1670171 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-886248 addons enable metrics-server
	
	I1119 03:04:00.117105 1670171 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1119 03:04:00.132270 1670171 api_server.go:141] control plane version: v1.34.1
	I1119 03:04:00.132302 1670171 api_server.go:131] duration metric: took 40.994961ms to wait for apiserver health ...
	I1119 03:04:00.132313 1670171 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 03:04:00.178031 1670171 system_pods.go:59] 8 kube-system pods found
	I1119 03:04:00.178146 1670171 system_pods.go:61] "coredns-66bc5c9577-wh5wb" [92363de0-8e50-45e7-84f7-8d0e20fa6d64] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1119 03:04:00.178192 1670171 system_pods.go:61] "etcd-newest-cni-886248" [5dc760bc-b71b-4b72-b27d-abf96ba66665] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1119 03:04:00.178224 1670171 system_pods.go:61] "kindnet-wbjgj" [baa5b1cf-5f4f-4ca9-959c-af74d9f62f83] Running
	I1119 03:04:00.178251 1670171 system_pods.go:61] "kube-apiserver-newest-cni-886248" [f48c4478-6515-4447-a2d8-bc8683421e68] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1119 03:04:00.178289 1670171 system_pods.go:61] "kube-controller-manager-newest-cni-886248" [78d87a76-a5af-4b59-9688-1f684aa4eb86] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1119 03:04:00.178319 1670171 system_pods.go:61] "kube-proxy-kn684" [f1bf5af1-d3b3-4b9b-9d12-1f94f4b89a2f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1119 03:04:00.178364 1670171 system_pods.go:61] "kube-scheduler-newest-cni-886248" [9d4bee4f-21a5-4c71-9174-885f35f536ac] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1119 03:04:00.178394 1670171 system_pods.go:61] "storage-provisioner" [4b774a63-0385-4354-91d0-0f4824a9a758] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1119 03:04:00.178417 1670171 system_pods.go:74] duration metric: took 46.0965ms to wait for pod list to return data ...
	I1119 03:04:00.178462 1670171 default_sa.go:34] waiting for default service account to be created ...
	I1119 03:04:00.204225 1670171 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1119 03:04:00.207261 1670171 addons.go:515] duration metric: took 6.55555183s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1119 03:04:00.208313 1670171 default_sa.go:45] found service account: "default"
	I1119 03:04:00.208398 1670171 default_sa.go:55] duration metric: took 29.912743ms for default service account to be created ...
	I1119 03:04:00.208454 1670171 kubeadm.go:587] duration metric: took 6.558172341s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1119 03:04:00.208502 1670171 node_conditions.go:102] verifying NodePressure condition ...
	I1119 03:04:00.237215 1670171 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1119 03:04:00.237308 1670171 node_conditions.go:123] node cpu capacity is 2
	I1119 03:04:00.237340 1670171 node_conditions.go:105] duration metric: took 28.811186ms to run NodePressure ...
	I1119 03:04:00.237382 1670171 start.go:242] waiting for startup goroutines ...
	I1119 03:04:00.237407 1670171 start.go:247] waiting for cluster config update ...
	I1119 03:04:00.237434 1670171 start.go:256] writing updated cluster config ...
	I1119 03:04:00.237945 1670171 ssh_runner.go:195] Run: rm -f paused
	I1119 03:04:00.440062 1670171 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1119 03:04:00.443326 1670171 out.go:179] * Done! kubectl is now configured to use "newest-cni-886248" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 19 03:03:53 no-preload-800908 crio[837]: time="2025-11-19T03:03:53.877108973Z" level=info msg="Created container ee8bf30d7725f1ce70588db27bacf2881e504796ad6daa4932903045d5d344e7: kube-system/storage-provisioner/storage-provisioner" id=f9db54ac-fefe-437a-91a7-d8b5c6232333 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 03:03:53 no-preload-800908 crio[837]: time="2025-11-19T03:03:53.878357077Z" level=info msg="Starting container: ee8bf30d7725f1ce70588db27bacf2881e504796ad6daa4932903045d5d344e7" id=ce332690-9a21-4f2e-9aa9-5b161d4ace79 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 03:03:53 no-preload-800908 crio[837]: time="2025-11-19T03:03:53.882619503Z" level=info msg="Started container" PID=2483 containerID=ee8bf30d7725f1ce70588db27bacf2881e504796ad6daa4932903045d5d344e7 description=kube-system/storage-provisioner/storage-provisioner id=ce332690-9a21-4f2e-9aa9-5b161d4ace79 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f74c1e2a9629eb638a8b4c0fa66bad856209be7c4da0844ac27eb67851962a21
	Nov 19 03:03:57 no-preload-800908 crio[837]: time="2025-11-19T03:03:57.270722734Z" level=info msg="Running pod sandbox: default/busybox/POD" id=3a589bd8-3f6e-4dfe-add1-adb759a52fc9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 03:03:57 no-preload-800908 crio[837]: time="2025-11-19T03:03:57.270817714Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 03:03:57 no-preload-800908 crio[837]: time="2025-11-19T03:03:57.276206967Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:11378f3b85c7bbbf4b9f2cfb0b4ba45741fba35d67478c88a1f5132799c48520 UID:17120236-6096-4228-9230-9e5ac80c0aaf NetNS:/var/run/netns/65f3defc-e035-4c13-b167-70ed0137e11c Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40012bca70}] Aliases:map[]}"
	Nov 19 03:03:57 no-preload-800908 crio[837]: time="2025-11-19T03:03:57.276259142Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 19 03:03:57 no-preload-800908 crio[837]: time="2025-11-19T03:03:57.301023741Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:11378f3b85c7bbbf4b9f2cfb0b4ba45741fba35d67478c88a1f5132799c48520 UID:17120236-6096-4228-9230-9e5ac80c0aaf NetNS:/var/run/netns/65f3defc-e035-4c13-b167-70ed0137e11c Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40012bca70}] Aliases:map[]}"
	Nov 19 03:03:57 no-preload-800908 crio[837]: time="2025-11-19T03:03:57.301170822Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 19 03:03:57 no-preload-800908 crio[837]: time="2025-11-19T03:03:57.303909533Z" level=info msg="Ran pod sandbox 11378f3b85c7bbbf4b9f2cfb0b4ba45741fba35d67478c88a1f5132799c48520 with infra container: default/busybox/POD" id=3a589bd8-3f6e-4dfe-add1-adb759a52fc9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 03:03:57 no-preload-800908 crio[837]: time="2025-11-19T03:03:57.314866249Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=3167aaf8-6db2-4c4e-93cf-8fa8cad8a8f7 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 03:03:57 no-preload-800908 crio[837]: time="2025-11-19T03:03:57.315852674Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=3167aaf8-6db2-4c4e-93cf-8fa8cad8a8f7 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 03:03:57 no-preload-800908 crio[837]: time="2025-11-19T03:03:57.315990171Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=3167aaf8-6db2-4c4e-93cf-8fa8cad8a8f7 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 03:03:57 no-preload-800908 crio[837]: time="2025-11-19T03:03:57.31674672Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=6dda1857-80e9-4f8a-8144-d8f1789a6958 name=/runtime.v1.ImageService/PullImage
	Nov 19 03:03:57 no-preload-800908 crio[837]: time="2025-11-19T03:03:57.318973438Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 19 03:03:59 no-preload-800908 crio[837]: time="2025-11-19T03:03:59.331523373Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=6dda1857-80e9-4f8a-8144-d8f1789a6958 name=/runtime.v1.ImageService/PullImage
	Nov 19 03:03:59 no-preload-800908 crio[837]: time="2025-11-19T03:03:59.33257716Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=7ce40ded-b0c3-4335-8ec0-b1369dc88ccd name=/runtime.v1.ImageService/ImageStatus
	Nov 19 03:03:59 no-preload-800908 crio[837]: time="2025-11-19T03:03:59.338105658Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e70fe6ba-9c7d-40af-ad7f-dd53c206ff9f name=/runtime.v1.ImageService/ImageStatus
	Nov 19 03:03:59 no-preload-800908 crio[837]: time="2025-11-19T03:03:59.347507366Z" level=info msg="Creating container: default/busybox/busybox" id=b913712a-17aa-4df9-8ebd-34442e6cbeeb name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 03:03:59 no-preload-800908 crio[837]: time="2025-11-19T03:03:59.347956682Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 03:03:59 no-preload-800908 crio[837]: time="2025-11-19T03:03:59.356736607Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 03:03:59 no-preload-800908 crio[837]: time="2025-11-19T03:03:59.357364782Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 03:03:59 no-preload-800908 crio[837]: time="2025-11-19T03:03:59.387204214Z" level=info msg="Created container c613ec294d8e0262f93b6bb0b152848ca1a82be3b05c67e95591c66470d7803f: default/busybox/busybox" id=b913712a-17aa-4df9-8ebd-34442e6cbeeb name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 03:03:59 no-preload-800908 crio[837]: time="2025-11-19T03:03:59.3909027Z" level=info msg="Starting container: c613ec294d8e0262f93b6bb0b152848ca1a82be3b05c67e95591c66470d7803f" id=40b318fb-0f71-4102-a92f-39f7e768e1df name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 03:03:59 no-preload-800908 crio[837]: time="2025-11-19T03:03:59.395257136Z" level=info msg="Started container" PID=2530 containerID=c613ec294d8e0262f93b6bb0b152848ca1a82be3b05c67e95591c66470d7803f description=default/busybox/busybox id=40b318fb-0f71-4102-a92f-39f7e768e1df name=/runtime.v1.RuntimeService/StartContainer sandboxID=11378f3b85c7bbbf4b9f2cfb0b4ba45741fba35d67478c88a1f5132799c48520
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	c613ec294d8e0       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   9 seconds ago       Running             busybox                   0                   11378f3b85c7b       busybox                                     default
	ee8bf30d7725f       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                      14 seconds ago      Running             storage-provisioner       0                   f74c1e2a9629e       storage-provisioner                         kube-system
	82a29b8504f8c       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      15 seconds ago      Running             coredns                   0                   fc72e609cd754       coredns-66bc5c9577-5gb8d                    kube-system
	1d2185f479cda       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    26 seconds ago      Running             kindnet-cni               0                   1ba3ac8cb6d58       kindnet-hcdj9                               kube-system
	24860ffdbcd83       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      31 seconds ago      Running             kube-proxy                0                   520a2f8d0af4a       kube-proxy-59bnq                            kube-system
	c9f2a1f048283       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      49 seconds ago      Running             kube-controller-manager   0                   3023e887d0c72       kube-controller-manager-no-preload-800908   kube-system
	e1c5790d4ce2e       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      49 seconds ago      Running             kube-scheduler            0                   ba6a4311e39c5       kube-scheduler-no-preload-800908            kube-system
	9ab95d2ed11da       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      49 seconds ago      Running             kube-apiserver            0                   b3bd2d9c4dbf7       kube-apiserver-no-preload-800908            kube-system
	f069fd6f0e7e4       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      49 seconds ago      Running             etcd                      0                   eae77745c3bfd       etcd-no-preload-800908                      kube-system
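	The container status table above lists every container the CRI knows about, with image, state and owning pod. Roughly the same view can be pulled on the node with crictl; the label filter mirrors the kube-system query seen earlier in the log:
	
	sudo crictl ps -a
	sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system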
	
	
	==> coredns [82a29b8504f8cae749e356e5ac23ab5a5231178af14d48ed7c40de97b2992408] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:57101 - 54745 "HINFO IN 7135011036116975915.6503964193811261746. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.00566611s
	
	
	==> describe nodes <==
	Name:               no-preload-800908
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-800908
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ebd49efe8b7d44f89fc96e21beffcd64dfee8277
	                    minikube.k8s.io/name=no-preload-800908
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T03_03_32_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 03:03:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-800908
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 03:04:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 03:04:01 +0000   Wed, 19 Nov 2025 03:03:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 03:04:01 +0000   Wed, 19 Nov 2025 03:03:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 03:04:01 +0000   Wed, 19 Nov 2025 03:03:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 03:04:01 +0000   Wed, 19 Nov 2025 03:03:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-800908
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                792d2464-6007-420a-8ab8-fddc03078e19
	  Boot ID:                    b92b1939-fcd0-45dc-ac89-2d161566a71c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-66bc5c9577-5gb8d                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     32s
	  kube-system                 etcd-no-preload-800908                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         37s
	  kube-system                 kindnet-hcdj9                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      33s
	  kube-system                 kube-apiserver-no-preload-800908             250m (12%)    0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 kube-controller-manager-no-preload-800908    200m (10%)    0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 kube-proxy-59bnq                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-scheduler-no-preload-800908             100m (5%)     0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 31s                kube-proxy       
	  Warning  CgroupV1                 50s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  50s (x8 over 50s)  kubelet          Node no-preload-800908 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    50s (x8 over 50s)  kubelet          Node no-preload-800908 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     50s (x8 over 50s)  kubelet          Node no-preload-800908 status is now: NodeHasSufficientPID
	  Normal   Starting                 38s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 38s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  37s                kubelet          Node no-preload-800908 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    37s                kubelet          Node no-preload-800908 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     37s                kubelet          Node no-preload-800908 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           33s                node-controller  Node no-preload-800908 event: Registered Node no-preload-800908 in Controller
	  Normal   NodeReady                15s                kubelet          Node no-preload-800908 status is now: NodeReady
	
	
	==> dmesg <==
	[ +25.528121] overlayfs: idmapped layers are currently not supported
	[ +11.329962] overlayfs: idmapped layers are currently not supported
	[Nov19 02:42] overlayfs: idmapped layers are currently not supported
	[ +16.386117] overlayfs: idmapped layers are currently not supported
	[Nov19 02:43] overlayfs: idmapped layers are currently not supported
	[ +23.762081] overlayfs: idmapped layers are currently not supported
	[Nov19 02:45] overlayfs: idmapped layers are currently not supported
	[Nov19 02:46] overlayfs: idmapped layers are currently not supported
	[Nov19 02:48] overlayfs: idmapped layers are currently not supported
	[Nov19 02:50] overlayfs: idmapped layers are currently not supported
	[ +30.622614] overlayfs: idmapped layers are currently not supported
	[Nov19 02:53] overlayfs: idmapped layers are currently not supported
	[Nov19 02:55] overlayfs: idmapped layers are currently not supported
	[ +48.629499] overlayfs: idmapped layers are currently not supported
	[Nov19 02:56] overlayfs: idmapped layers are currently not supported
	[ +31.470515] overlayfs: idmapped layers are currently not supported
	[Nov19 02:57] overlayfs: idmapped layers are currently not supported
	[Nov19 02:58] overlayfs: idmapped layers are currently not supported
	[Nov19 03:00] overlayfs: idmapped layers are currently not supported
	[  +8.385032] overlayfs: idmapped layers are currently not supported
	[Nov19 03:01] overlayfs: idmapped layers are currently not supported
	[  +9.842210] overlayfs: idmapped layers are currently not supported
	[Nov19 03:02] overlayfs: idmapped layers are currently not supported
	[Nov19 03:03] overlayfs: idmapped layers are currently not supported
	[ +33.377847] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [f069fd6f0e7e4e28b3141d89e6311df10bfbf6d33fff2364c2063dfe1778651a] <==
	{"level":"warn","ts":"2025-11-19T03:03:26.193068Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:03:26.237063Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:03:26.309997Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:03:26.345732Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:03:26.390691Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:03:26.433212Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:03:26.482441Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:03:26.524494Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:03:26.551518Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:03:26.590182Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:03:26.623617Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:03:26.636522Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:03:26.656815Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:03:26.682411Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:03:26.704140Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:03:26.730197Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:03:26.754585Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:03:26.772306Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:03:26.840424Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:03:26.854252Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:03:26.876903Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:03:26.903392Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:03:26.918141Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:03:26.933652Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:03:27.051521Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42810","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 03:04:08 up 10:46,  0 user,  load average: 6.02, 4.18, 3.08
	Linux no-preload-800908 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1d2185f479cda59b174240e1f8af59df220bcba85632ed91ba45ebdc022eab0e] <==
	I1119 03:03:42.528899       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 03:03:42.529307       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1119 03:03:42.529474       1 main.go:148] setting mtu 1500 for CNI 
	I1119 03:03:42.529667       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 03:03:42.529709       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T03:03:42Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 03:03:42.734378       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 03:03:42.734608       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 03:03:42.734655       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 03:03:42.735689       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1119 03:03:43.035016       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 03:03:43.035043       1 metrics.go:72] Registering metrics
	I1119 03:03:43.035118       1 controller.go:711] "Syncing nftables rules"
	I1119 03:03:52.734210       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1119 03:03:52.734310       1 main.go:301] handling current node
	I1119 03:04:02.733585       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1119 03:04:02.733726       1 main.go:301] handling current node
	
	
	==> kube-apiserver [9ab95d2ed11da9c07d5f8714d0cb9616928489e24b9b429adecb3a9b32e84e78] <==
	I1119 03:03:28.218432       1 controller.go:667] quota admission added evaluator for: namespaces
	I1119 03:03:28.225229       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 03:03:28.225259       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1119 03:03:28.262118       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 03:03:28.262308       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	E1119 03:03:28.276256       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1119 03:03:28.496490       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 03:03:28.888055       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1119 03:03:28.903814       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1119 03:03:28.903838       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 03:03:29.946158       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 03:03:30.020039       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 03:03:30.143239       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1119 03:03:30.232010       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1119 03:03:30.291113       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1119 03:03:30.293178       1 controller.go:667] quota admission added evaluator for: endpoints
	I1119 03:03:30.306906       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1119 03:03:31.032397       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1119 03:03:31.185289       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1119 03:03:31.248857       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1119 03:03:35.396586       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 03:03:35.404009       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 03:03:35.940348       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1119 03:03:36.113196       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1119 03:04:06.829771       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:35882: use of closed network connection
	
	
	==> kube-controller-manager [c9f2a1f0482839be6c102a4cd064799d5bb1fed6535f049a7e0fe4b2a291d1f5] <==
	I1119 03:03:35.185868       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1119 03:03:35.187524       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1119 03:03:35.188489       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1119 03:03:35.188600       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1119 03:03:35.190831       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 03:03:35.193342       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1119 03:03:35.194521       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1119 03:03:35.195663       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1119 03:03:35.196783       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1119 03:03:35.196829       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1119 03:03:35.196849       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1119 03:03:35.196854       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1119 03:03:35.196860       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1119 03:03:35.200948       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1119 03:03:35.202365       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1119 03:03:35.206988       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-800908" podCIDRs=["10.244.0.0/24"]
	I1119 03:03:35.208548       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1119 03:03:35.225770       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1119 03:03:35.235672       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1119 03:03:35.237895       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 03:03:35.237916       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1119 03:03:35.237923       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1119 03:03:35.238070       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1119 03:03:35.241611       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 03:03:55.190256       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [24860ffdbcd83764bffe5422dfc6a48a4f2b7a36936fd13465bbe5238cd8f79a] <==
	I1119 03:03:36.948586       1 server_linux.go:53] "Using iptables proxy"
	I1119 03:03:37.097496       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 03:03:37.231797       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 03:03:37.231828       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1119 03:03:37.231896       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 03:03:37.360089       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 03:03:37.360140       1 server_linux.go:132] "Using iptables Proxier"
	I1119 03:03:37.382431       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 03:03:37.382800       1 server.go:527] "Version info" version="v1.34.1"
	I1119 03:03:37.382825       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 03:03:37.384289       1 config.go:200] "Starting service config controller"
	I1119 03:03:37.384316       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 03:03:37.384334       1 config.go:106] "Starting endpoint slice config controller"
	I1119 03:03:37.384339       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 03:03:37.384353       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 03:03:37.384356       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 03:03:37.385037       1 config.go:309] "Starting node config controller"
	I1119 03:03:37.385051       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 03:03:37.385057       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 03:03:37.485145       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1119 03:03:37.485179       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1119 03:03:37.485225       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [e1c5790d4ce2e15b0287cd336930456a61b083b522ec079d8259479dcac66db1] <==
	E1119 03:03:28.220455       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1119 03:03:28.220517       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1119 03:03:28.220646       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1119 03:03:28.220710       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1119 03:03:28.220764       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1119 03:03:28.220823       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1119 03:03:28.220877       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1119 03:03:28.220982       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1119 03:03:28.221054       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1119 03:03:28.221104       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1119 03:03:29.045362       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1119 03:03:29.045526       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1119 03:03:29.077887       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1119 03:03:29.109981       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1119 03:03:29.121785       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1119 03:03:29.202851       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1119 03:03:29.244137       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1119 03:03:29.266769       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1119 03:03:29.284768       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1119 03:03:29.378079       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1119 03:03:29.385872       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1119 03:03:29.422257       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1119 03:03:29.430003       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1119 03:03:29.437665       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	I1119 03:03:31.875458       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 19 03:03:34 no-preload-800908 kubelet[1998]: I1119 03:03:34.470008    1998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-no-preload-800908" podStartSLOduration=3.469988789 podStartE2EDuration="3.469988789s" podCreationTimestamp="2025-11-19 03:03:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 03:03:32.61829433 +0000 UTC m=+1.731835996" watchObservedRunningTime="2025-11-19 03:03:34.469988789 +0000 UTC m=+3.583530431"
	Nov 19 03:03:35 no-preload-800908 kubelet[1998]: I1119 03:03:35.260587    1998 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 19 03:03:35 no-preload-800908 kubelet[1998]: I1119 03:03:35.261432    1998 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 19 03:03:36 no-preload-800908 kubelet[1998]: I1119 03:03:36.174468    1998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pv46\" (UniqueName: \"kubernetes.io/projected/6b6ee3ab-c31d-447c-895b-d341732cb482-kube-api-access-7pv46\") pod \"kube-proxy-59bnq\" (UID: \"6b6ee3ab-c31d-447c-895b-d341732cb482\") " pod="kube-system/kube-proxy-59bnq"
	Nov 19 03:03:36 no-preload-800908 kubelet[1998]: I1119 03:03:36.175105    1998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dc9e982d-8e14-47c6-a9a3-a4502602caa4-xtables-lock\") pod \"kindnet-hcdj9\" (UID: \"dc9e982d-8e14-47c6-a9a3-a4502602caa4\") " pod="kube-system/kindnet-hcdj9"
	Nov 19 03:03:36 no-preload-800908 kubelet[1998]: I1119 03:03:36.175288    1998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sg56t\" (UniqueName: \"kubernetes.io/projected/dc9e982d-8e14-47c6-a9a3-a4502602caa4-kube-api-access-sg56t\") pod \"kindnet-hcdj9\" (UID: \"dc9e982d-8e14-47c6-a9a3-a4502602caa4\") " pod="kube-system/kindnet-hcdj9"
	Nov 19 03:03:36 no-preload-800908 kubelet[1998]: I1119 03:03:36.175421    1998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6b6ee3ab-c31d-447c-895b-d341732cb482-xtables-lock\") pod \"kube-proxy-59bnq\" (UID: \"6b6ee3ab-c31d-447c-895b-d341732cb482\") " pod="kube-system/kube-proxy-59bnq"
	Nov 19 03:03:36 no-preload-800908 kubelet[1998]: I1119 03:03:36.175543    1998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dc9e982d-8e14-47c6-a9a3-a4502602caa4-lib-modules\") pod \"kindnet-hcdj9\" (UID: \"dc9e982d-8e14-47c6-a9a3-a4502602caa4\") " pod="kube-system/kindnet-hcdj9"
	Nov 19 03:03:36 no-preload-800908 kubelet[1998]: I1119 03:03:36.175666    1998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6b6ee3ab-c31d-447c-895b-d341732cb482-kube-proxy\") pod \"kube-proxy-59bnq\" (UID: \"6b6ee3ab-c31d-447c-895b-d341732cb482\") " pod="kube-system/kube-proxy-59bnq"
	Nov 19 03:03:36 no-preload-800908 kubelet[1998]: I1119 03:03:36.175799    1998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6b6ee3ab-c31d-447c-895b-d341732cb482-lib-modules\") pod \"kube-proxy-59bnq\" (UID: \"6b6ee3ab-c31d-447c-895b-d341732cb482\") " pod="kube-system/kube-proxy-59bnq"
	Nov 19 03:03:36 no-preload-800908 kubelet[1998]: I1119 03:03:36.175919    1998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/dc9e982d-8e14-47c6-a9a3-a4502602caa4-cni-cfg\") pod \"kindnet-hcdj9\" (UID: \"dc9e982d-8e14-47c6-a9a3-a4502602caa4\") " pod="kube-system/kindnet-hcdj9"
	Nov 19 03:03:36 no-preload-800908 kubelet[1998]: I1119 03:03:36.295040    1998 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 19 03:03:36 no-preload-800908 kubelet[1998]: W1119 03:03:36.356942    1998 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/b531313c62c45257823b101ae62d386ff7365c2550425d41cad2e93644f4aebd/crio-1ba3ac8cb6d5830651a7da5a886730a6dff1a41bb6d324e0c02b8db8e2050284 WatchSource:0}: Error finding container 1ba3ac8cb6d5830651a7da5a886730a6dff1a41bb6d324e0c02b8db8e2050284: Status 404 returned error can't find the container with id 1ba3ac8cb6d5830651a7da5a886730a6dff1a41bb6d324e0c02b8db8e2050284
	Nov 19 03:03:40 no-preload-800908 kubelet[1998]: I1119 03:03:40.090807    1998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-59bnq" podStartSLOduration=5.090790579 podStartE2EDuration="5.090790579s" podCreationTimestamp="2025-11-19 03:03:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 03:03:37.312314493 +0000 UTC m=+6.425856127" watchObservedRunningTime="2025-11-19 03:03:40.090790579 +0000 UTC m=+9.204332213"
	Nov 19 03:03:53 no-preload-800908 kubelet[1998]: I1119 03:03:53.128555    1998 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 19 03:03:53 no-preload-800908 kubelet[1998]: I1119 03:03:53.169839    1998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-hcdj9" podStartSLOduration=12.256749144 podStartE2EDuration="18.169821997s" podCreationTimestamp="2025-11-19 03:03:35 +0000 UTC" firstStartedPulling="2025-11-19 03:03:36.360834617 +0000 UTC m=+5.474376259" lastFinishedPulling="2025-11-19 03:03:42.273907478 +0000 UTC m=+11.387449112" observedRunningTime="2025-11-19 03:03:43.365426794 +0000 UTC m=+12.478968444" watchObservedRunningTime="2025-11-19 03:03:53.169821997 +0000 UTC m=+22.283363639"
	Nov 19 03:03:53 no-preload-800908 kubelet[1998]: I1119 03:03:53.334382    1998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmfvq\" (UniqueName: \"kubernetes.io/projected/f2cf06c3-a27f-4205-bf83-035adba73690-kube-api-access-wmfvq\") pod \"coredns-66bc5c9577-5gb8d\" (UID: \"f2cf06c3-a27f-4205-bf83-035adba73690\") " pod="kube-system/coredns-66bc5c9577-5gb8d"
	Nov 19 03:03:53 no-preload-800908 kubelet[1998]: I1119 03:03:53.334599    1998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/41c9b9d6-c070-4f5d-92ec-e0f2baf1609d-tmp\") pod \"storage-provisioner\" (UID: \"41c9b9d6-c070-4f5d-92ec-e0f2baf1609d\") " pod="kube-system/storage-provisioner"
	Nov 19 03:03:53 no-preload-800908 kubelet[1998]: I1119 03:03:53.334698    1998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtqjm\" (UniqueName: \"kubernetes.io/projected/41c9b9d6-c070-4f5d-92ec-e0f2baf1609d-kube-api-access-rtqjm\") pod \"storage-provisioner\" (UID: \"41c9b9d6-c070-4f5d-92ec-e0f2baf1609d\") " pod="kube-system/storage-provisioner"
	Nov 19 03:03:53 no-preload-800908 kubelet[1998]: I1119 03:03:53.334793    1998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f2cf06c3-a27f-4205-bf83-035adba73690-config-volume\") pod \"coredns-66bc5c9577-5gb8d\" (UID: \"f2cf06c3-a27f-4205-bf83-035adba73690\") " pod="kube-system/coredns-66bc5c9577-5gb8d"
	Nov 19 03:03:53 no-preload-800908 kubelet[1998]: W1119 03:03:53.528433    1998 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/b531313c62c45257823b101ae62d386ff7365c2550425d41cad2e93644f4aebd/crio-fc72e609cd75452f3a4f3deb2e06960da5bac78779c90714a64a4a134d4c531d WatchSource:0}: Error finding container fc72e609cd75452f3a4f3deb2e06960da5bac78779c90714a64a4a134d4c531d: Status 404 returned error can't find the container with id fc72e609cd75452f3a4f3deb2e06960da5bac78779c90714a64a4a134d4c531d
	Nov 19 03:03:53 no-preload-800908 kubelet[1998]: W1119 03:03:53.807950    1998 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/b531313c62c45257823b101ae62d386ff7365c2550425d41cad2e93644f4aebd/crio-f74c1e2a9629eb638a8b4c0fa66bad856209be7c4da0844ac27eb67851962a21 WatchSource:0}: Error finding container f74c1e2a9629eb638a8b4c0fa66bad856209be7c4da0844ac27eb67851962a21: Status 404 returned error can't find the container with id f74c1e2a9629eb638a8b4c0fa66bad856209be7c4da0844ac27eb67851962a21
	Nov 19 03:03:54 no-preload-800908 kubelet[1998]: I1119 03:03:54.408170    1998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=15.408151402 podStartE2EDuration="15.408151402s" podCreationTimestamp="2025-11-19 03:03:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 03:03:54.372165176 +0000 UTC m=+23.485706826" watchObservedRunningTime="2025-11-19 03:03:54.408151402 +0000 UTC m=+23.521693052"
	Nov 19 03:03:56 no-preload-800908 kubelet[1998]: I1119 03:03:56.657190    1998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-5gb8d" podStartSLOduration=20.657169917 podStartE2EDuration="20.657169917s" podCreationTimestamp="2025-11-19 03:03:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 03:03:54.426790408 +0000 UTC m=+23.540332058" watchObservedRunningTime="2025-11-19 03:03:56.657169917 +0000 UTC m=+25.770711559"
	Nov 19 03:03:56 no-preload-800908 kubelet[1998]: I1119 03:03:56.863722    1998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5t8rd\" (UniqueName: \"kubernetes.io/projected/17120236-6096-4228-9230-9e5ac80c0aaf-kube-api-access-5t8rd\") pod \"busybox\" (UID: \"17120236-6096-4228-9230-9e5ac80c0aaf\") " pod="default/busybox"
	
	
	==> storage-provisioner [ee8bf30d7725f1ce70588db27bacf2881e504796ad6daa4932903045d5d344e7] <==
	I1119 03:03:53.911620       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1119 03:03:53.942394       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1119 03:03:53.942522       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1119 03:03:53.945325       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:03:53.955805       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 03:03:53.956059       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1119 03:03:53.956267       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-800908_95cf8396-a153-45f8-a86a-07adc0555cd6!
	I1119 03:03:53.956358       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ba963751-4855-448d-b28c-3b35fd351123", APIVersion:"v1", ResourceVersion:"446", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-800908_95cf8396-a153-45f8-a86a-07adc0555cd6 became leader
	W1119 03:03:53.962521       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:03:53.975822       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 03:03:54.057271       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-800908_95cf8396-a153-45f8-a86a-07adc0555cd6!
	W1119 03:03:55.978957       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:03:55.984196       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:03:57.987878       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:03:57.997601       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:04:00.000546       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:04:00.007759       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:04:02.011517       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:04:02.016807       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:04:04.021279       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:04:04.029720       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:04:06.033878       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:04:06.040345       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:04:08.043825       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:04:08.052603       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-800908 -n no-preload-800908
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-800908 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (3.13s)
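For manual follow-up, a minimal sketch of re-running the same post-mortem checks by hand against this profile (both commands are copied from the helpers_test.go output above and assume the no-preload-800908 cluster is still running):

	out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-800908 -n no-preload-800908
	kubectl --context no-preload-800908 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running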

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (7.13s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-800908 --alsologtostderr -v=1
E1119 03:05:29.009968 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p no-preload-800908 --alsologtostderr -v=1: exit status 80 (2.581643254s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-800908 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1119 03:05:27.310975 1678792 out.go:360] Setting OutFile to fd 1 ...
	I1119 03:05:27.311405 1678792 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 03:05:27.311411 1678792 out.go:374] Setting ErrFile to fd 2...
	I1119 03:05:27.311416 1678792 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 03:05:27.311839 1678792 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-1463525/.minikube/bin
	I1119 03:05:27.312179 1678792 out.go:368] Setting JSON to false
	I1119 03:05:27.312199 1678792 mustload.go:66] Loading cluster: no-preload-800908
	I1119 03:05:27.312921 1678792 config.go:182] Loaded profile config "no-preload-800908": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 03:05:27.313779 1678792 cli_runner.go:164] Run: docker container inspect no-preload-800908 --format={{.State.Status}}
	I1119 03:05:27.334251 1678792 host.go:66] Checking if "no-preload-800908" exists ...
	I1119 03:05:27.334582 1678792 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 03:05:27.396181 1678792 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-19 03:05:27.386873643 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 03:05:27.396839 1678792 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-800908 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1119 03:05:27.400393 1678792 out.go:179] * Pausing node no-preload-800908 ... 
	I1119 03:05:27.403301 1678792 host.go:66] Checking if "no-preload-800908" exists ...
	I1119 03:05:27.403663 1678792 ssh_runner.go:195] Run: systemctl --version
	I1119 03:05:27.403713 1678792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-800908
	I1119 03:05:27.422035 1678792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34945 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/no-preload-800908/id_rsa Username:docker}
	I1119 03:05:27.528223 1678792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 03:05:27.549617 1678792 pause.go:52] kubelet running: true
	I1119 03:05:27.549684 1678792 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1119 03:05:27.797402 1678792 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1119 03:05:27.797550 1678792 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1119 03:05:27.866957 1678792 cri.go:89] found id: "5c44fc33f2c6f591d084800a048552c1fe51bc9a96b100574aab26d266ae2d23"
	I1119 03:05:27.866985 1678792 cri.go:89] found id: "9aebe2a3b6de103c7f8b9e2bd8f80a05a5befe8af91661090dd46344cae6c829"
	I1119 03:05:27.866990 1678792 cri.go:89] found id: "8aba6b7c7be445a2875873b755efd1399e985179e6a913cc3aefc480b738613c"
	I1119 03:05:27.866993 1678792 cri.go:89] found id: "a4b14efb5df254be991154d1dfd68e56342ac94b3a3a071d5cdf8aa75b5e2b0a"
	I1119 03:05:27.866997 1678792 cri.go:89] found id: "c47e30a501ed736547bbb4377e6df1e33a7226c1b2c94803f55b4e972ff18abd"
	I1119 03:05:27.867001 1678792 cri.go:89] found id: "1d586c7c3109fd5ba0aba02ff22f254bea2462e97b24f5d3f134dc24d068e0e6"
	I1119 03:05:27.867004 1678792 cri.go:89] found id: "bb7e2b0cb0cd02d62ac7ad2c37fe309260d9fcd24b72ccd2af687c7b1dcc6ec5"
	I1119 03:05:27.867007 1678792 cri.go:89] found id: "d72640f599edb0a7cc747d54663105ae5e186229c7ab646168a63821cf3e3666"
	I1119 03:05:27.867010 1678792 cri.go:89] found id: "1c5a8ad5bc6a5d13b6cef75a968c097e0e15feaca2933a332cc62792968879fc"
	I1119 03:05:27.867046 1678792 cri.go:89] found id: "1a78dba8bce368677aa036142ffa9608c3867766e29fb2e1011d917c5d6f239f"
	I1119 03:05:27.867058 1678792 cri.go:89] found id: "c867c5f07d95fb9e228a76ad97bd7ec2f39291ceef9462dfcb386be776ad518c"
	I1119 03:05:27.867062 1678792 cri.go:89] found id: ""
	I1119 03:05:27.867120 1678792 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 03:05:27.883847 1678792 retry.go:31] will retry after 325.763991ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T03:05:27Z" level=error msg="open /run/runc: no such file or directory"
	I1119 03:05:28.210128 1678792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 03:05:28.223060 1678792 pause.go:52] kubelet running: false
	I1119 03:05:28.223126 1678792 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1119 03:05:28.394821 1678792 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1119 03:05:28.394912 1678792 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1119 03:05:28.468531 1678792 cri.go:89] found id: "5c44fc33f2c6f591d084800a048552c1fe51bc9a96b100574aab26d266ae2d23"
	I1119 03:05:28.468602 1678792 cri.go:89] found id: "9aebe2a3b6de103c7f8b9e2bd8f80a05a5befe8af91661090dd46344cae6c829"
	I1119 03:05:28.468622 1678792 cri.go:89] found id: "8aba6b7c7be445a2875873b755efd1399e985179e6a913cc3aefc480b738613c"
	I1119 03:05:28.468642 1678792 cri.go:89] found id: "a4b14efb5df254be991154d1dfd68e56342ac94b3a3a071d5cdf8aa75b5e2b0a"
	I1119 03:05:28.468673 1678792 cri.go:89] found id: "c47e30a501ed736547bbb4377e6df1e33a7226c1b2c94803f55b4e972ff18abd"
	I1119 03:05:28.468693 1678792 cri.go:89] found id: "1d586c7c3109fd5ba0aba02ff22f254bea2462e97b24f5d3f134dc24d068e0e6"
	I1119 03:05:28.468712 1678792 cri.go:89] found id: "bb7e2b0cb0cd02d62ac7ad2c37fe309260d9fcd24b72ccd2af687c7b1dcc6ec5"
	I1119 03:05:28.468729 1678792 cri.go:89] found id: "d72640f599edb0a7cc747d54663105ae5e186229c7ab646168a63821cf3e3666"
	I1119 03:05:28.468748 1678792 cri.go:89] found id: "1c5a8ad5bc6a5d13b6cef75a968c097e0e15feaca2933a332cc62792968879fc"
	I1119 03:05:28.468779 1678792 cri.go:89] found id: "1a78dba8bce368677aa036142ffa9608c3867766e29fb2e1011d917c5d6f239f"
	I1119 03:05:28.468804 1678792 cri.go:89] found id: "c867c5f07d95fb9e228a76ad97bd7ec2f39291ceef9462dfcb386be776ad518c"
	I1119 03:05:28.468822 1678792 cri.go:89] found id: ""
	I1119 03:05:28.468901 1678792 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 03:05:28.480458 1678792 retry.go:31] will retry after 272.062594ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T03:05:28Z" level=error msg="open /run/runc: no such file or directory"
	I1119 03:05:28.752926 1678792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 03:05:28.765832 1678792 pause.go:52] kubelet running: false
	I1119 03:05:28.765925 1678792 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1119 03:05:28.944525 1678792 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1119 03:05:28.944628 1678792 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1119 03:05:29.025589 1678792 cri.go:89] found id: "5c44fc33f2c6f591d084800a048552c1fe51bc9a96b100574aab26d266ae2d23"
	I1119 03:05:29.025615 1678792 cri.go:89] found id: "9aebe2a3b6de103c7f8b9e2bd8f80a05a5befe8af91661090dd46344cae6c829"
	I1119 03:05:29.025621 1678792 cri.go:89] found id: "8aba6b7c7be445a2875873b755efd1399e985179e6a913cc3aefc480b738613c"
	I1119 03:05:29.025625 1678792 cri.go:89] found id: "a4b14efb5df254be991154d1dfd68e56342ac94b3a3a071d5cdf8aa75b5e2b0a"
	I1119 03:05:29.025629 1678792 cri.go:89] found id: "c47e30a501ed736547bbb4377e6df1e33a7226c1b2c94803f55b4e972ff18abd"
	I1119 03:05:29.025645 1678792 cri.go:89] found id: "1d586c7c3109fd5ba0aba02ff22f254bea2462e97b24f5d3f134dc24d068e0e6"
	I1119 03:05:29.025649 1678792 cri.go:89] found id: "bb7e2b0cb0cd02d62ac7ad2c37fe309260d9fcd24b72ccd2af687c7b1dcc6ec5"
	I1119 03:05:29.025652 1678792 cri.go:89] found id: "d72640f599edb0a7cc747d54663105ae5e186229c7ab646168a63821cf3e3666"
	I1119 03:05:29.025655 1678792 cri.go:89] found id: "1c5a8ad5bc6a5d13b6cef75a968c097e0e15feaca2933a332cc62792968879fc"
	I1119 03:05:29.025662 1678792 cri.go:89] found id: "1a78dba8bce368677aa036142ffa9608c3867766e29fb2e1011d917c5d6f239f"
	I1119 03:05:29.025666 1678792 cri.go:89] found id: "c867c5f07d95fb9e228a76ad97bd7ec2f39291ceef9462dfcb386be776ad518c"
	I1119 03:05:29.025670 1678792 cri.go:89] found id: ""
	I1119 03:05:29.025734 1678792 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 03:05:29.037342 1678792 retry.go:31] will retry after 485.693351ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T03:05:29Z" level=error msg="open /run/runc: no such file or directory"
	I1119 03:05:29.524198 1678792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 03:05:29.537549 1678792 pause.go:52] kubelet running: false
	I1119 03:05:29.537614 1678792 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1119 03:05:29.736752 1678792 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1119 03:05:29.736846 1678792 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1119 03:05:29.804478 1678792 cri.go:89] found id: "5c44fc33f2c6f591d084800a048552c1fe51bc9a96b100574aab26d266ae2d23"
	I1119 03:05:29.804541 1678792 cri.go:89] found id: "9aebe2a3b6de103c7f8b9e2bd8f80a05a5befe8af91661090dd46344cae6c829"
	I1119 03:05:29.804560 1678792 cri.go:89] found id: "8aba6b7c7be445a2875873b755efd1399e985179e6a913cc3aefc480b738613c"
	I1119 03:05:29.804591 1678792 cri.go:89] found id: "a4b14efb5df254be991154d1dfd68e56342ac94b3a3a071d5cdf8aa75b5e2b0a"
	I1119 03:05:29.804620 1678792 cri.go:89] found id: "c47e30a501ed736547bbb4377e6df1e33a7226c1b2c94803f55b4e972ff18abd"
	I1119 03:05:29.804644 1678792 cri.go:89] found id: "1d586c7c3109fd5ba0aba02ff22f254bea2462e97b24f5d3f134dc24d068e0e6"
	I1119 03:05:29.804662 1678792 cri.go:89] found id: "bb7e2b0cb0cd02d62ac7ad2c37fe309260d9fcd24b72ccd2af687c7b1dcc6ec5"
	I1119 03:05:29.804681 1678792 cri.go:89] found id: "d72640f599edb0a7cc747d54663105ae5e186229c7ab646168a63821cf3e3666"
	I1119 03:05:29.804722 1678792 cri.go:89] found id: "1c5a8ad5bc6a5d13b6cef75a968c097e0e15feaca2933a332cc62792968879fc"
	I1119 03:05:29.804752 1678792 cri.go:89] found id: "1a78dba8bce368677aa036142ffa9608c3867766e29fb2e1011d917c5d6f239f"
	I1119 03:05:29.804768 1678792 cri.go:89] found id: "c867c5f07d95fb9e228a76ad97bd7ec2f39291ceef9462dfcb386be776ad518c"
	I1119 03:05:29.804784 1678792 cri.go:89] found id: ""
	I1119 03:05:29.804858 1678792 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 03:05:29.819928 1678792 out.go:203] 
	W1119 03:05:29.822904 1678792 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T03:05:29Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T03:05:29Z" level=error msg="open /run/runc: no such file or directory"
	
	W1119 03:05:29.822922 1678792 out.go:285] * 
	* 
	W1119 03:05:29.832456 1678792 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 03:05:29.835333 1678792 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p no-preload-800908 --alsologtostderr -v=1 failed: exit status 80
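The stderr above shows the pause path getting as far as stopping kubelet and listing the kube-system containers through crictl, then failing on every retry of "sudo runc list -f json" with "open /run/runc: no such file or directory", which is what surfaces as GUEST_PAUSE and exit status 80. A minimal way to re-run the same probes by hand, assuming the no-preload-800908 profile from this run (the commands are standard minikube, crictl, and runc invocations; the expected failure is taken from the log above, not re-verified):

	# the container listing that still succeeds
	out/minikube-linux-arm64 -p no-preload-800908 ssh -- "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	# the call that fails in the log because runc's state directory is absent
	out/minikube-linux-arm64 -p no-preload-800908 ssh -- "sudo runc list -f json"
	out/minikube-linux-arm64 -p no-preload-800908 ssh -- "ls -ld /run/runc"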
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-800908
helpers_test.go:243: (dbg) docker inspect no-preload-800908:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b531313c62c45257823b101ae62d386ff7365c2550425d41cad2e93644f4aebd",
	        "Created": "2025-11-19T03:02:36.622194348Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1676021,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T03:04:22.856247234Z",
	            "FinishedAt": "2025-11-19T03:04:21.83067211Z"
	        },
	        "Image": "sha256:6dfeb5329cc83d126555f88f43b09eef7a09c7f546c9166b94d33747df91b6df",
	        "ResolvConfPath": "/var/lib/docker/containers/b531313c62c45257823b101ae62d386ff7365c2550425d41cad2e93644f4aebd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b531313c62c45257823b101ae62d386ff7365c2550425d41cad2e93644f4aebd/hostname",
	        "HostsPath": "/var/lib/docker/containers/b531313c62c45257823b101ae62d386ff7365c2550425d41cad2e93644f4aebd/hosts",
	        "LogPath": "/var/lib/docker/containers/b531313c62c45257823b101ae62d386ff7365c2550425d41cad2e93644f4aebd/b531313c62c45257823b101ae62d386ff7365c2550425d41cad2e93644f4aebd-json.log",
	        "Name": "/no-preload-800908",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-800908:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-800908",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b531313c62c45257823b101ae62d386ff7365c2550425d41cad2e93644f4aebd",
	                "LowerDir": "/var/lib/docker/overlay2/5f2a991abb1ac9e1d4f1b633bb11e2415ce1437a860a51427c5b7ab54fc65618-init/diff:/var/lib/docker/overlay2/c48d08e2bd245db4e1c5c6447aff9f72126e9377265a1f1172daf5070a059e2a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5f2a991abb1ac9e1d4f1b633bb11e2415ce1437a860a51427c5b7ab54fc65618/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5f2a991abb1ac9e1d4f1b633bb11e2415ce1437a860a51427c5b7ab54fc65618/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5f2a991abb1ac9e1d4f1b633bb11e2415ce1437a860a51427c5b7ab54fc65618/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-800908",
	                "Source": "/var/lib/docker/volumes/no-preload-800908/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-800908",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-800908",
	                "name.minikube.sigs.k8s.io": "no-preload-800908",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c99dc21eaad7e489de2f50de801d37f7251dc481120a18ed507d6cd7bf73eb01",
	            "SandboxKey": "/var/run/docker/netns/c99dc21eaad7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34945"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34946"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34949"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34947"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34948"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-800908": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4e:32:33:9b:d6:82",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "2c1e146c03dfa36d5dc32c1606b9c05b9b637b68e1e65d533d701c41873db1eb",
	                    "EndpointID": "01780b93a59159822b7b9047c4fbf0064a597d1c0dcdf9e1a1796aa8de9e581b",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-800908",
	                        "b531313c62c4"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
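The inspect output shows the kic container itself is healthy (State.Running true, State.Paused false, RestartCount 0, and 22/tcp still published on 127.0.0.1:34945), so the pause failure is inside the node rather than at the Docker level. Narrower views of the same data can be pulled with Go templates like the one the test already uses to resolve the SSH port; a small sketch, reusing the profile name from this run:

	docker inspect -f '{{.State.Status}} paused={{.State.Paused}} restarts={{.RestartCount}}' no-preload-800908
	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-800908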
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-800908 -n no-preload-800908
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-800908 -n no-preload-800908: exit status 2 (433.082253ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
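The host check prints Running while the command exits 2, which the helper tolerates ("may be ok"): the earlier pause attempt had already run "systemctl disable --now kubelet" (the stderr shows kubelet running: false on the retries), and minikube status encodes per-component state in its exit code, so a non-zero exit with a Running host is consistent with kubelet being down. A fuller view of the component states, assuming minikube's documented status template fields (Host, Kubelet, APIServer, Kubeconfig):

	out/minikube-linux-arm64 status -p no-preload-800908 --output json
	out/minikube-linux-arm64 status -p no-preload-800908 --format '{{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}}'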
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-800908 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-800908 logs -n 25: (1.355114227s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ pause   │ -p default-k8s-diff-port-579203 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-579203 │ jenkins │ v1.37.0 │ 19 Nov 25 03:02 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-579203                                                                                                                                                                                                               │ default-k8s-diff-port-579203 │ jenkins │ v1.37.0 │ 19 Nov 25 03:02 UTC │ 19 Nov 25 03:02 UTC │
	│ delete  │ -p default-k8s-diff-port-579203                                                                                                                                                                                                               │ default-k8s-diff-port-579203 │ jenkins │ v1.37.0 │ 19 Nov 25 03:02 UTC │ 19 Nov 25 03:02 UTC │
	│ delete  │ -p disable-driver-mounts-722439                                                                                                                                                                                                               │ disable-driver-mounts-722439 │ jenkins │ v1.37.0 │ 19 Nov 25 03:02 UTC │ 19 Nov 25 03:02 UTC │
	│ start   │ -p no-preload-800908 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-800908            │ jenkins │ v1.37.0 │ 19 Nov 25 03:02 UTC │ 19 Nov 25 03:03 UTC │
	│ image   │ embed-certs-592123 image list --format=json                                                                                                                                                                                                   │ embed-certs-592123           │ jenkins │ v1.37.0 │ 19 Nov 25 03:02 UTC │ 19 Nov 25 03:02 UTC │
	│ pause   │ -p embed-certs-592123 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-592123           │ jenkins │ v1.37.0 │ 19 Nov 25 03:02 UTC │                     │
	│ delete  │ -p embed-certs-592123                                                                                                                                                                                                                         │ embed-certs-592123           │ jenkins │ v1.37.0 │ 19 Nov 25 03:02 UTC │ 19 Nov 25 03:02 UTC │
	│ delete  │ -p embed-certs-592123                                                                                                                                                                                                                         │ embed-certs-592123           │ jenkins │ v1.37.0 │ 19 Nov 25 03:02 UTC │ 19 Nov 25 03:02 UTC │
	│ start   │ -p newest-cni-886248 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-886248            │ jenkins │ v1.37.0 │ 19 Nov 25 03:02 UTC │ 19 Nov 25 03:03 UTC │
	│ addons  │ enable metrics-server -p newest-cni-886248 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-886248            │ jenkins │ v1.37.0 │ 19 Nov 25 03:03 UTC │                     │
	│ stop    │ -p newest-cni-886248 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-886248            │ jenkins │ v1.37.0 │ 19 Nov 25 03:03 UTC │ 19 Nov 25 03:03 UTC │
	│ addons  │ enable dashboard -p newest-cni-886248 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-886248            │ jenkins │ v1.37.0 │ 19 Nov 25 03:03 UTC │ 19 Nov 25 03:03 UTC │
	│ start   │ -p newest-cni-886248 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-886248            │ jenkins │ v1.37.0 │ 19 Nov 25 03:03 UTC │ 19 Nov 25 03:04 UTC │
	│ image   │ newest-cni-886248 image list --format=json                                                                                                                                                                                                    │ newest-cni-886248            │ jenkins │ v1.37.0 │ 19 Nov 25 03:04 UTC │ 19 Nov 25 03:04 UTC │
	│ pause   │ -p newest-cni-886248 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-886248            │ jenkins │ v1.37.0 │ 19 Nov 25 03:04 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-800908 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-800908            │ jenkins │ v1.37.0 │ 19 Nov 25 03:04 UTC │                     │
	│ delete  │ -p newest-cni-886248                                                                                                                                                                                                                          │ newest-cni-886248            │ jenkins │ v1.37.0 │ 19 Nov 25 03:04 UTC │ 19 Nov 25 03:04 UTC │
	│ delete  │ -p newest-cni-886248                                                                                                                                                                                                                          │ newest-cni-886248            │ jenkins │ v1.37.0 │ 19 Nov 25 03:04 UTC │ 19 Nov 25 03:04 UTC │
	│ stop    │ -p no-preload-800908 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-800908            │ jenkins │ v1.37.0 │ 19 Nov 25 03:04 UTC │ 19 Nov 25 03:04 UTC │
	│ start   │ -p auto-889743 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-889743                  │ jenkins │ v1.37.0 │ 19 Nov 25 03:04 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-800908 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-800908            │ jenkins │ v1.37.0 │ 19 Nov 25 03:04 UTC │ 19 Nov 25 03:04 UTC │
	│ start   │ -p no-preload-800908 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-800908            │ jenkins │ v1.37.0 │ 19 Nov 25 03:04 UTC │ 19 Nov 25 03:05 UTC │
	│ image   │ no-preload-800908 image list --format=json                                                                                                                                                                                                    │ no-preload-800908            │ jenkins │ v1.37.0 │ 19 Nov 25 03:05 UTC │ 19 Nov 25 03:05 UTC │
	│ pause   │ -p no-preload-800908 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-800908            │ jenkins │ v1.37.0 │ 19 Nov 25 03:05 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 03:04:22
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 03:04:22.477306 1675821 out.go:360] Setting OutFile to fd 1 ...
	I1119 03:04:22.477529 1675821 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 03:04:22.477557 1675821 out.go:374] Setting ErrFile to fd 2...
	I1119 03:04:22.477575 1675821 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 03:04:22.477864 1675821 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-1463525/.minikube/bin
	I1119 03:04:22.478277 1675821 out.go:368] Setting JSON to false
	I1119 03:04:22.479264 1675821 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":38790,"bootTime":1763482673,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1119 03:04:22.479357 1675821 start.go:143] virtualization:  
	I1119 03:04:22.482775 1675821 out.go:179] * [no-preload-800908] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1119 03:04:22.486923 1675821 out.go:179]   - MINIKUBE_LOCATION=21924
	I1119 03:04:22.486988 1675821 notify.go:221] Checking for updates...
	I1119 03:04:22.493916 1675821 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 03:04:22.496918 1675821 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21924-1463525/kubeconfig
	I1119 03:04:22.499739 1675821 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-1463525/.minikube
	I1119 03:04:22.502679 1675821 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1119 03:04:22.505662 1675821 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 03:04:22.509211 1675821 config.go:182] Loaded profile config "no-preload-800908": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 03:04:22.509833 1675821 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 03:04:22.547110 1675821 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1119 03:04:22.547225 1675821 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 03:04:22.641936 1675821 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-19 03:04:22.629366043 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 03:04:22.642046 1675821 docker.go:319] overlay module found
	I1119 03:04:22.645358 1675821 out.go:179] * Using the docker driver based on existing profile
	I1119 03:04:22.648203 1675821 start.go:309] selected driver: docker
	I1119 03:04:22.648219 1675821 start.go:930] validating driver "docker" against &{Name:no-preload-800908 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-800908 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9
p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 03:04:22.648326 1675821 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 03:04:22.648985 1675821 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 03:04:22.738407 1675821 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-19 03:04:22.728328916 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 03:04:22.738728 1675821 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 03:04:22.738770 1675821 cni.go:84] Creating CNI manager for ""
	I1119 03:04:22.738823 1675821 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 03:04:22.738870 1675821 start.go:353] cluster config:
	{Name:no-preload-800908 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-800908 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 03:04:22.744066 1675821 out.go:179] * Starting "no-preload-800908" primary control-plane node in "no-preload-800908" cluster
	I1119 03:04:22.747796 1675821 cache.go:134] Beginning downloading kic base image for docker with crio
	I1119 03:04:22.750846 1675821 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1119 03:04:22.753591 1675821 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 03:04:22.753649 1675821 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1119 03:04:22.753726 1675821 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/no-preload-800908/config.json ...
	I1119 03:04:22.754009 1675821 cache.go:107] acquiring lock: {Name:mkb58f30e5376d33040dfa777b3f8180ea85082b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 03:04:22.754093 1675821 cache.go:115] /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1119 03:04:22.754107 1675821 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 109.158µs
	I1119 03:04:22.754115 1675821 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1119 03:04:22.754127 1675821 cache.go:107] acquiring lock: {Name:mk4427b1057ed3426220ced6aa14c26e167661f6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 03:04:22.754160 1675821 cache.go:115] /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1119 03:04:22.754170 1675821 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 44.298µs
	I1119 03:04:22.754176 1675821 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1119 03:04:22.754186 1675821 cache.go:107] acquiring lock: {Name:mke3a5e1f8219de1d6d968640b180760e94eaad4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 03:04:22.754219 1675821 cache.go:115] /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1119 03:04:22.754229 1675821 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 43.674µs
	I1119 03:04:22.754235 1675821 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1119 03:04:22.754244 1675821 cache.go:107] acquiring lock: {Name:mkc90d3e387ee9423dce3105ec70e08f9a213a9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 03:04:22.754276 1675821 cache.go:115] /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1119 03:04:22.754286 1675821 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 42.387µs
	I1119 03:04:22.754292 1675821 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1119 03:04:22.754301 1675821 cache.go:107] acquiring lock: {Name:mk6ffbb0756aa279cf3ba05ddd5e5f7e66e5cbe5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 03:04:22.754333 1675821 cache.go:115] /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1119 03:04:22.754342 1675821 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 42.51µs
	I1119 03:04:22.754349 1675821 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1119 03:04:22.754358 1675821 cache.go:107] acquiring lock: {Name:mk88c3661a1e8c3438804e10f7c7d80646d19f18 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 03:04:22.754392 1675821 cache.go:115] /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1119 03:04:22.754401 1675821 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 44.487µs
	I1119 03:04:22.754407 1675821 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1119 03:04:22.754416 1675821 cache.go:107] acquiring lock: {Name:mk4358ffb1d662d66c4de9c14824434035268345 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 03:04:22.754442 1675821 cache.go:115] /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1119 03:04:22.754448 1675821 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 32.213µs
	I1119 03:04:22.754453 1675821 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1119 03:04:22.754461 1675821 cache.go:107] acquiring lock: {Name:mk1d702ebd613a383e3fb22e99729e7baba0b90f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 03:04:22.754486 1675821 cache.go:115] /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1119 03:04:22.754491 1675821 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 30.669µs
	I1119 03:04:22.754496 1675821 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1119 03:04:22.754501 1675821 cache.go:87] Successfully saved all images to host disk.
	I1119 03:04:22.774440 1675821 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1119 03:04:22.774465 1675821 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1119 03:04:22.774477 1675821 cache.go:243] Successfully downloaded all kic artifacts
	I1119 03:04:22.774503 1675821 start.go:360] acquireMachinesLock for no-preload-800908: {Name:mk6bdccc03286e3d7d2db959eee2861a6643234c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 03:04:22.774554 1675821 start.go:364] duration metric: took 32.967µs to acquireMachinesLock for "no-preload-800908"
	I1119 03:04:22.774579 1675821 start.go:96] Skipping create...Using existing machine configuration
	I1119 03:04:22.774584 1675821 fix.go:54] fixHost starting: 
	I1119 03:04:22.774844 1675821 cli_runner.go:164] Run: docker container inspect no-preload-800908 --format={{.State.Status}}
	I1119 03:04:22.805965 1675821 fix.go:112] recreateIfNeeded on no-preload-800908: state=Stopped err=<nil>
	W1119 03:04:22.805992 1675821 fix.go:138] unexpected machine state, will restart: <nil>
	I1119 03:04:20.790362 1673805 cli_runner.go:164] Run: docker network inspect auto-889743 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 03:04:20.806078 1673805 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1119 03:04:20.809828 1673805 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 03:04:20.818965 1673805 kubeadm.go:884] updating cluster {Name:auto-889743 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-889743 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 03:04:20.819088 1673805 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 03:04:20.819150 1673805 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 03:04:20.850420 1673805 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 03:04:20.850440 1673805 crio.go:433] Images already preloaded, skipping extraction
	I1119 03:04:20.850493 1673805 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 03:04:20.875573 1673805 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 03:04:20.875635 1673805 cache_images.go:86] Images are preloaded, skipping loading
	I1119 03:04:20.875650 1673805 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1119 03:04:20.875749 1673805 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-889743 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-889743 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 03:04:20.875830 1673805 ssh_runner.go:195] Run: crio config
	I1119 03:04:20.930247 1673805 cni.go:84] Creating CNI manager for ""
	I1119 03:04:20.930381 1673805 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 03:04:20.930415 1673805 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 03:04:20.930468 1673805 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-889743 NodeName:auto-889743 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 03:04:20.930626 1673805 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-889743"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1119 03:04:20.930700 1673805 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 03:04:20.938283 1673805 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 03:04:20.938397 1673805 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 03:04:20.945577 1673805 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1119 03:04:20.957321 1673805 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 03:04:20.970932 1673805 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
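The YAML dumped a few lines up is the complete kubeadm configuration minikube renders, and the scp above copies it to /var/tmp/minikube/kubeadm.yaml.new on the node. If you want to sanity-check a rendered config like this by hand, a dry run is one way to exercise it without touching node state; this is only a sketch, assuming the YAML was saved as kubeadm.yaml and that a matching kubeadm v1.34.1 binary is on PATH, with the preflight override mirroring the ignore list in the init command that appears later in this log:

	# Exercise a rendered kubeadm config without creating anything (sketch).
	sudo kubeadm init --config kubeadm.yaml --dry-run \
	  --ignore-preflight-errors=SystemVerification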
	I1119 03:04:20.982970 1673805 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1119 03:04:20.986301 1673805 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 03:04:20.995387 1673805 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 03:04:21.118536 1673805 ssh_runner.go:195] Run: sudo systemctl start kubelet
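At this point the kubelet drop-in and unit file have been written, systemd has been reloaded, and kubelet has been started. These are not commands this test runs, but if a start like this stalls here, the usual follow-up on the node is to ask systemd and the journal what kubelet is doing:

	# Inspect kubelet on the node after the restart above (standard systemd tooling).
	sudo systemctl status kubelet --no-pager
	sudo journalctl -u kubelet -n 50 --no-pager   # last 50 lines of kubelet logs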
	I1119 03:04:21.134578 1673805 certs.go:69] Setting up /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/auto-889743 for IP: 192.168.76.2
	I1119 03:04:21.134600 1673805 certs.go:195] generating shared ca certs ...
	I1119 03:04:21.134616 1673805 certs.go:227] acquiring lock for ca certs: {Name:mk25124d30540ffea11da5f0aa38fb6f55186602 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 03:04:21.134810 1673805 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.key
	I1119 03:04:21.134873 1673805 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/proxy-client-ca.key
	I1119 03:04:21.134886 1673805 certs.go:257] generating profile certs ...
	I1119 03:04:21.134970 1673805 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/auto-889743/client.key
	I1119 03:04:21.134988 1673805 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/auto-889743/client.crt with IP's: []
	I1119 03:04:21.647379 1673805 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/auto-889743/client.crt ...
	I1119 03:04:21.647411 1673805 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/auto-889743/client.crt: {Name:mk4968452fc6432ffbcd75e560a0b055d12d547d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 03:04:21.647641 1673805 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/auto-889743/client.key ...
	I1119 03:04:21.647657 1673805 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/auto-889743/client.key: {Name:mk69be173031f6e237aa979eb41f1c630569af27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 03:04:21.647763 1673805 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/auto-889743/apiserver.key.9a6120e7
	I1119 03:04:21.647784 1673805 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/auto-889743/apiserver.crt.9a6120e7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1119 03:04:21.919533 1673805 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/auto-889743/apiserver.crt.9a6120e7 ...
	I1119 03:04:21.919564 1673805 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/auto-889743/apiserver.crt.9a6120e7: {Name:mk4f448b899e5cfca9dab4b079bc5adff866e432 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 03:04:21.919758 1673805 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/auto-889743/apiserver.key.9a6120e7 ...
	I1119 03:04:21.919777 1673805 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/auto-889743/apiserver.key.9a6120e7: {Name:mk82626b592048e91a002fae92b142b076e3f304 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 03:04:21.919862 1673805 certs.go:382] copying /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/auto-889743/apiserver.crt.9a6120e7 -> /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/auto-889743/apiserver.crt
	I1119 03:04:21.919948 1673805 certs.go:386] copying /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/auto-889743/apiserver.key.9a6120e7 -> /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/auto-889743/apiserver.key
	I1119 03:04:21.920012 1673805 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/auto-889743/proxy-client.key
	I1119 03:04:21.920031 1673805 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/auto-889743/proxy-client.crt with IP's: []
	I1119 03:04:22.295569 1673805 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/auto-889743/proxy-client.crt ...
	I1119 03:04:22.295601 1673805 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/auto-889743/proxy-client.crt: {Name:mk95e166548be8785a0f5cf96868a38df8721371 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 03:04:22.295810 1673805 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/auto-889743/proxy-client.key ...
	I1119 03:04:22.295826 1673805 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/auto-889743/proxy-client.key: {Name:mk9c88e156ce0018059ec1432dda8ee584f6a5e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
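The profile certificates for auto-889743 (client, apiserver, aggregator proxy-client) are now generated, with the apiserver cert signed for the IPs listed a few lines up. A quick way to confirm which SANs actually landed in that certificate, for instance when a test later fails TLS verification, is to read the extension back out with openssl; the path below is taken from the log lines above and would differ on another host:

	# Print the Subject Alternative Names baked into the generated apiserver certificate.
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/auto-889743/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'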
	I1119 03:04:22.296046 1673805 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/1465377.pem (1338 bytes)
	W1119 03:04:22.296101 1673805 certs.go:480] ignoring /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/1465377_empty.pem, impossibly tiny 0 bytes
	I1119 03:04:22.296117 1673805 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca-key.pem (1679 bytes)
	I1119 03:04:22.296158 1673805 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem (1078 bytes)
	I1119 03:04:22.296197 1673805 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/cert.pem (1123 bytes)
	I1119 03:04:22.296220 1673805 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/key.pem (1675 bytes)
	I1119 03:04:22.296281 1673805 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/files/etc/ssl/certs/14653772.pem (1708 bytes)
	I1119 03:04:22.296919 1673805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 03:04:22.314116 1673805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1119 03:04:22.334054 1673805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 03:04:22.354574 1673805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1119 03:04:22.372525 1673805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/auto-889743/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1119 03:04:22.391335 1673805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/auto-889743/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1119 03:04:22.413425 1673805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/auto-889743/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 03:04:22.431458 1673805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/auto-889743/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1119 03:04:22.450243 1673805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/files/etc/ssl/certs/14653772.pem --> /usr/share/ca-certificates/14653772.pem (1708 bytes)
	I1119 03:04:22.468733 1673805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 03:04:22.488719 1673805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/1465377.pem --> /usr/share/ca-certificates/1465377.pem (1338 bytes)
	I1119 03:04:22.512179 1673805 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 03:04:22.530969 1673805 ssh_runner.go:195] Run: openssl version
	I1119 03:04:22.541925 1673805 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 03:04:22.551297 1673805 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 03:04:22.555425 1673805 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 01:58 /usr/share/ca-certificates/minikubeCA.pem
	I1119 03:04:22.555488 1673805 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 03:04:22.598526 1673805 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 03:04:22.606786 1673805 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1465377.pem && ln -fs /usr/share/ca-certificates/1465377.pem /etc/ssl/certs/1465377.pem"
	I1119 03:04:22.614840 1673805 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1465377.pem
	I1119 03:04:22.619090 1673805 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 02:04 /usr/share/ca-certificates/1465377.pem
	I1119 03:04:22.619149 1673805 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1465377.pem
	I1119 03:04:22.663889 1673805 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1465377.pem /etc/ssl/certs/51391683.0"
	I1119 03:04:22.672790 1673805 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14653772.pem && ln -fs /usr/share/ca-certificates/14653772.pem /etc/ssl/certs/14653772.pem"
	I1119 03:04:22.686617 1673805 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14653772.pem
	I1119 03:04:22.690340 1673805 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 02:04 /usr/share/ca-certificates/14653772.pem
	I1119 03:04:22.690408 1673805 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14653772.pem
	I1119 03:04:22.734822 1673805 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14653772.pem /etc/ssl/certs/3ec20f2e.0"
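Each openssl x509 -hash / ln -fs pair above installs a CA bundle under /etc/ssl/certs with the subject-hash filename OpenSSL looks up at verification time (b5213941.0, 51391683.0, 3ec20f2e.0 in this run). The same trick, written out as a small standalone sketch for an arbitrary certificate:

	# Link a CA certificate into /etc/ssl/certs under its OpenSSL subject hash,
	# mirroring the commands above; CERT is whichever PEM you want trusted.
	CERT=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")   # e.g. b5213941
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"  # .0 = first cert with this hash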
	I1119 03:04:22.743873 1673805 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 03:04:22.747510 1673805 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1119 03:04:22.747566 1673805 kubeadm.go:401] StartCluster: {Name:auto-889743 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-889743 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 03:04:22.747639 1673805 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 03:04:22.747698 1673805 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 03:04:22.793717 1673805 cri.go:89] found id: ""
	I1119 03:04:22.793798 1673805 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 03:04:22.820371 1673805 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1119 03:04:22.842976 1673805 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1119 03:04:22.843041 1673805 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1119 03:04:22.855895 1673805 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1119 03:04:22.855915 1673805 kubeadm.go:158] found existing configuration files:
	
	I1119 03:04:22.855977 1673805 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1119 03:04:22.867310 1673805 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1119 03:04:22.867382 1673805 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1119 03:04:22.875322 1673805 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1119 03:04:22.896052 1673805 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1119 03:04:22.896119 1673805 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1119 03:04:22.906586 1673805 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1119 03:04:22.915811 1673805 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1119 03:04:22.915871 1673805 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1119 03:04:22.938553 1673805 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1119 03:04:22.951671 1673805 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1119 03:04:22.951786 1673805 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1119 03:04:22.960341 1673805 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1119 03:04:23.020295 1673805 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1119 03:04:23.020436 1673805 kubeadm.go:319] [preflight] Running pre-flight checks
	I1119 03:04:23.049092 1673805 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1119 03:04:23.049184 1673805 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1119 03:04:23.049227 1673805 kubeadm.go:319] OS: Linux
	I1119 03:04:23.049285 1673805 kubeadm.go:319] CGROUPS_CPU: enabled
	I1119 03:04:23.049340 1673805 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1119 03:04:23.049396 1673805 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1119 03:04:23.049469 1673805 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1119 03:04:23.049624 1673805 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1119 03:04:23.049721 1673805 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1119 03:04:23.049807 1673805 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1119 03:04:23.049904 1673805 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1119 03:04:23.049963 1673805 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1119 03:04:23.166268 1673805 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1119 03:04:23.166383 1673805 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1119 03:04:23.166489 1673805 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1119 03:04:23.192873 1673805 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1119 03:04:23.198323 1673805 out.go:252]   - Generating certificates and keys ...
	I1119 03:04:23.198434 1673805 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1119 03:04:23.198511 1673805 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1119 03:04:23.877601 1673805 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1119 03:04:24.375965 1673805 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1119 03:04:22.811158 1675821 out.go:252] * Restarting existing docker container for "no-preload-800908" ...
	I1119 03:04:22.811244 1675821 cli_runner.go:164] Run: docker start no-preload-800908
	I1119 03:04:23.095393 1675821 cli_runner.go:164] Run: docker container inspect no-preload-800908 --format={{.State.Status}}
	I1119 03:04:23.133120 1675821 kic.go:430] container "no-preload-800908" state is running.
	I1119 03:04:23.133500 1675821 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-800908
	I1119 03:04:23.157043 1675821 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/no-preload-800908/config.json ...
	I1119 03:04:23.157256 1675821 machine.go:94] provisionDockerMachine start ...
	I1119 03:04:23.157323 1675821 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-800908
	I1119 03:04:23.184664 1675821 main.go:143] libmachine: Using SSH client type: native
	I1119 03:04:23.184987 1675821 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34945 <nil> <nil>}
	I1119 03:04:23.185006 1675821 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 03:04:23.185706 1675821 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1119 03:04:26.338727 1675821 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-800908
	
	I1119 03:04:26.338817 1675821 ubuntu.go:182] provisioning hostname "no-preload-800908"
	I1119 03:04:26.338934 1675821 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-800908
	I1119 03:04:26.367888 1675821 main.go:143] libmachine: Using SSH client type: native
	I1119 03:04:26.368255 1675821 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34945 <nil> <nil>}
	I1119 03:04:26.368268 1675821 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-800908 && echo "no-preload-800908" | sudo tee /etc/hostname
	I1119 03:04:26.537478 1675821 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-800908
	
	I1119 03:04:26.537609 1675821 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-800908
	I1119 03:04:26.565776 1675821 main.go:143] libmachine: Using SSH client type: native
	I1119 03:04:26.566097 1675821 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34945 <nil> <nil>}
	I1119 03:04:26.566120 1675821 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-800908' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-800908/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-800908' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 03:04:26.729876 1675821 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 03:04:26.729903 1675821 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21924-1463525/.minikube CaCertPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21924-1463525/.minikube}
	I1119 03:04:26.729932 1675821 ubuntu.go:190] setting up certificates
	I1119 03:04:26.729950 1675821 provision.go:84] configureAuth start
	I1119 03:04:26.730012 1675821 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-800908
	I1119 03:04:26.751917 1675821 provision.go:143] copyHostCerts
	I1119 03:04:26.751985 1675821 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.pem, removing ...
	I1119 03:04:26.752005 1675821 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.pem
	I1119 03:04:26.752090 1675821 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.pem (1078 bytes)
	I1119 03:04:26.752197 1675821 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-1463525/.minikube/cert.pem, removing ...
	I1119 03:04:26.752208 1675821 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-1463525/.minikube/cert.pem
	I1119 03:04:26.752235 1675821 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21924-1463525/.minikube/cert.pem (1123 bytes)
	I1119 03:04:26.752295 1675821 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-1463525/.minikube/key.pem, removing ...
	I1119 03:04:26.752304 1675821 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-1463525/.minikube/key.pem
	I1119 03:04:26.752327 1675821 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21924-1463525/.minikube/key.pem (1675 bytes)
	I1119 03:04:26.752379 1675821 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca-key.pem org=jenkins.no-preload-800908 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-800908]
	I1119 03:04:27.066523 1675821 provision.go:177] copyRemoteCerts
	I1119 03:04:27.066595 1675821 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 03:04:27.066639 1675821 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-800908
	I1119 03:04:27.084521 1675821 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34945 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/no-preload-800908/id_rsa Username:docker}
	I1119 03:04:27.185910 1675821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1119 03:04:27.206304 1675821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1119 03:04:27.225483 1675821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1119 03:04:27.245359 1675821 provision.go:87] duration metric: took 515.38313ms to configureAuth
	I1119 03:04:27.245386 1675821 ubuntu.go:206] setting minikube options for container-runtime
	I1119 03:04:27.245648 1675821 config.go:182] Loaded profile config "no-preload-800908": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 03:04:27.245751 1675821 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-800908
	I1119 03:04:27.269715 1675821 main.go:143] libmachine: Using SSH client type: native
	I1119 03:04:27.270086 1675821 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34945 <nil> <nil>}
	I1119 03:04:27.270105 1675821 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 03:04:27.702511 1675821 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 03:04:27.702533 1675821 machine.go:97] duration metric: took 4.545264555s to provisionDockerMachine
	I1119 03:04:27.702558 1675821 start.go:293] postStartSetup for "no-preload-800908" (driver="docker")
	I1119 03:04:27.702574 1675821 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 03:04:27.702639 1675821 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 03:04:27.702676 1675821 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-800908
	I1119 03:04:27.727081 1675821 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34945 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/no-preload-800908/id_rsa Username:docker}
	I1119 03:04:27.834258 1675821 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 03:04:27.838159 1675821 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 03:04:27.838184 1675821 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 03:04:27.838200 1675821 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-1463525/.minikube/addons for local assets ...
	I1119 03:04:27.838251 1675821 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-1463525/.minikube/files for local assets ...
	I1119 03:04:27.838330 1675821 filesync.go:149] local asset: /home/jenkins/minikube-integration/21924-1463525/.minikube/files/etc/ssl/certs/14653772.pem -> 14653772.pem in /etc/ssl/certs
	I1119 03:04:27.838431 1675821 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 03:04:27.846432 1675821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/files/etc/ssl/certs/14653772.pem --> /etc/ssl/certs/14653772.pem (1708 bytes)
	I1119 03:04:27.865426 1675821 start.go:296] duration metric: took 162.852151ms for postStartSetup
	I1119 03:04:27.865639 1675821 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 03:04:27.865713 1675821 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-800908
	I1119 03:04:27.887086 1675821 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34945 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/no-preload-800908/id_rsa Username:docker}
	I1119 03:04:27.986576 1675821 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 03:04:27.991531 1675821 fix.go:56] duration metric: took 5.216938581s for fixHost
	I1119 03:04:27.991558 1675821 start.go:83] releasing machines lock for "no-preload-800908", held for 5.216989698s
	I1119 03:04:27.991622 1675821 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-800908
	I1119 03:04:28.010559 1675821 ssh_runner.go:195] Run: cat /version.json
	I1119 03:04:28.010611 1675821 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-800908
	I1119 03:04:28.010631 1675821 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 03:04:28.010703 1675821 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-800908
	I1119 03:04:28.045903 1675821 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34945 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/no-preload-800908/id_rsa Username:docker}
	I1119 03:04:28.061487 1675821 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34945 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/no-preload-800908/id_rsa Username:docker}
	I1119 03:04:28.165449 1675821 ssh_runner.go:195] Run: systemctl --version
	I1119 03:04:28.268864 1675821 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 03:04:28.310548 1675821 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 03:04:28.315751 1675821 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 03:04:28.315842 1675821 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 03:04:28.324285 1675821 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1119 03:04:28.324322 1675821 start.go:496] detecting cgroup driver to use...
	I1119 03:04:28.324352 1675821 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1119 03:04:28.324408 1675821 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 03:04:28.339759 1675821 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 03:04:28.353545 1675821 docker.go:218] disabling cri-docker service (if available) ...
	I1119 03:04:28.353617 1675821 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 03:04:28.369803 1675821 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 03:04:28.383942 1675821 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 03:04:28.529444 1675821 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 03:04:28.672412 1675821 docker.go:234] disabling docker service ...
	I1119 03:04:28.672484 1675821 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 03:04:28.687776 1675821 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 03:04:28.702110 1675821 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 03:04:28.861541 1675821 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 03:04:29.019918 1675821 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 03:04:29.034606 1675821 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 03:04:29.048887 1675821 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 03:04:29.048952 1675821 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 03:04:29.057944 1675821 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1119 03:04:29.058030 1675821 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 03:04:29.067225 1675821 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 03:04:29.076371 1675821 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 03:04:29.085632 1675821 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 03:04:29.094458 1675821 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 03:04:29.103631 1675821 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 03:04:29.112386 1675821 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 03:04:29.121448 1675821 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 03:04:29.129853 1675821 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 03:04:29.137955 1675821 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 03:04:29.277615 1675821 ssh_runner.go:195] Run: sudo systemctl restart crio
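The run of sed and grep commands above rewrites the CRI-O drop-in at /etc/crio/crio.conf.d/02-crio.conf: pin the pause image, switch the cgroup manager to cgroupfs, move conmon into the pod cgroup, and open unprivileged ports via default_sysctls, after which systemd is reloaded and crio restarted. One way to confirm the drop-in ended up the way those edits intend (the expected values below are reconstructed from the commands, not captured from this run):

	# Show the keys the sed commands above are supposed to have set in the CRI-O drop-in.
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# Expected, reconstructed from the edits above:
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   default_sysctls = [
	#     "net.ipv4.ip_unprivileged_port_start=0",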
	I1119 03:04:29.475839 1675821 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 03:04:29.475926 1675821 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 03:04:29.480424 1675821 start.go:564] Will wait 60s for crictl version
	I1119 03:04:29.480496 1675821 ssh_runner.go:195] Run: which crictl
	I1119 03:04:29.484215 1675821 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 03:04:29.527833 1675821 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1119 03:04:29.527928 1675821 ssh_runner.go:195] Run: crio --version
	I1119 03:04:29.591343 1675821 ssh_runner.go:195] Run: crio --version
	I1119 03:04:29.632174 1675821 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1119 03:04:25.602531 1673805 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1119 03:04:26.169128 1673805 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1119 03:04:28.576219 1673805 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1119 03:04:28.576482 1673805 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-889743 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1119 03:04:29.634933 1675821 cli_runner.go:164] Run: docker network inspect no-preload-800908 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 03:04:29.651400 1675821 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1119 03:04:29.655911 1675821 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 03:04:29.671952 1675821 kubeadm.go:884] updating cluster {Name:no-preload-800908 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-800908 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 03:04:29.672071 1675821 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 03:04:29.672121 1675821 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 03:04:29.715290 1675821 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 03:04:29.715318 1675821 cache_images.go:86] Images are preloaded, skipping loading
	I1119 03:04:29.715326 1675821 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1119 03:04:29.715426 1675821 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-800908 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-800908 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 03:04:29.715508 1675821 ssh_runner.go:195] Run: crio config
	I1119 03:04:29.796980 1675821 cni.go:84] Creating CNI manager for ""
	I1119 03:04:29.797013 1675821 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 03:04:29.797038 1675821 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 03:04:29.797067 1675821 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-800908 NodeName:no-preload-800908 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 03:04:29.797197 1675821 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-800908"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1119 03:04:29.797272 1675821 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 03:04:29.812171 1675821 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 03:04:29.812253 1675821 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 03:04:29.820591 1675821 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1119 03:04:29.835019 1675821 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 03:04:29.849614 1675821 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1119 03:04:29.864522 1675821 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1119 03:04:29.868733 1675821 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 03:04:29.879176 1675821 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 03:04:30.034156 1675821 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 03:04:30.054377 1675821 certs.go:69] Setting up /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/no-preload-800908 for IP: 192.168.85.2
	I1119 03:04:30.054415 1675821 certs.go:195] generating shared ca certs ...
	I1119 03:04:30.054432 1675821 certs.go:227] acquiring lock for ca certs: {Name:mk25124d30540ffea11da5f0aa38fb6f55186602 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 03:04:30.054656 1675821 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.key
	I1119 03:04:30.054721 1675821 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/proxy-client-ca.key
	I1119 03:04:30.054736 1675821 certs.go:257] generating profile certs ...
	I1119 03:04:30.054862 1675821 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/no-preload-800908/client.key
	I1119 03:04:30.054962 1675821 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/no-preload-800908/apiserver.key.a073045a
	I1119 03:04:30.055009 1675821 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/no-preload-800908/proxy-client.key
	I1119 03:04:30.055157 1675821 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/1465377.pem (1338 bytes)
	W1119 03:04:30.055203 1675821 certs.go:480] ignoring /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/1465377_empty.pem, impossibly tiny 0 bytes
	I1119 03:04:30.055218 1675821 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca-key.pem (1679 bytes)
	I1119 03:04:30.055244 1675821 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem (1078 bytes)
	I1119 03:04:30.055279 1675821 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/cert.pem (1123 bytes)
	I1119 03:04:30.055307 1675821 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/key.pem (1675 bytes)
	I1119 03:04:30.055365 1675821 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/files/etc/ssl/certs/14653772.pem (1708 bytes)
	I1119 03:04:30.056216 1675821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 03:04:30.151786 1675821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1119 03:04:30.175482 1675821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 03:04:30.211097 1675821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1119 03:04:30.272244 1675821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/no-preload-800908/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1119 03:04:30.320185 1675821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/no-preload-800908/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1119 03:04:30.382620 1675821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/no-preload-800908/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 03:04:30.413991 1675821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/no-preload-800908/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1119 03:04:30.437379 1675821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/files/etc/ssl/certs/14653772.pem --> /usr/share/ca-certificates/14653772.pem (1708 bytes)
	I1119 03:04:30.461365 1675821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 03:04:30.480940 1675821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/1465377.pem --> /usr/share/ca-certificates/1465377.pem (1338 bytes)
	I1119 03:04:30.511513 1675821 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 03:04:30.526389 1675821 ssh_runner.go:195] Run: openssl version
	I1119 03:04:30.532877 1675821 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14653772.pem && ln -fs /usr/share/ca-certificates/14653772.pem /etc/ssl/certs/14653772.pem"
	I1119 03:04:30.543953 1675821 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14653772.pem
	I1119 03:04:30.547918 1675821 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 02:04 /usr/share/ca-certificates/14653772.pem
	I1119 03:04:30.547997 1675821 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14653772.pem
	I1119 03:04:30.589956 1675821 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14653772.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 03:04:30.597875 1675821 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 03:04:30.605739 1675821 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 03:04:30.609933 1675821 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 01:58 /usr/share/ca-certificates/minikubeCA.pem
	I1119 03:04:30.610007 1675821 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 03:04:30.651169 1675821 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 03:04:30.659132 1675821 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1465377.pem && ln -fs /usr/share/ca-certificates/1465377.pem /etc/ssl/certs/1465377.pem"
	I1119 03:04:30.667464 1675821 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1465377.pem
	I1119 03:04:30.671601 1675821 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 02:04 /usr/share/ca-certificates/1465377.pem
	I1119 03:04:30.671675 1675821 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1465377.pem
	I1119 03:04:30.714054 1675821 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1465377.pem /etc/ssl/certs/51391683.0"
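For reference only (not part of the captured log): the sequence above installs each CA by asking openssl for its subject hash and then symlinking it as <hash>.0 in the trust directory, which is how OpenSSL locates CA certificates. A minimal Go sketch of that two-step pattern, assuming the same paths as the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCert hashes a CA cert with openssl and symlinks it as <hash>.0 in trustDir.
func linkCert(certPath, trustDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(trustDir, hash+".0")
	// Remove any stale link first so the symlink call is idempotent.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println(err)
	}
}
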
	I1119 03:04:30.722578 1675821 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 03:04:30.726771 1675821 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1119 03:04:30.767728 1675821 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1119 03:04:30.811652 1675821 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1119 03:04:30.888231 1675821 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1119 03:04:30.979272 1675821 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1119 03:04:31.056819 1675821 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1119 03:04:31.134247 1675821 kubeadm.go:401] StartCluster: {Name:no-preload-800908 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-800908 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 03:04:31.134357 1675821 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 03:04:31.134430 1675821 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 03:04:31.196447 1675821 cri.go:89] found id: "bb7e2b0cb0cd02d62ac7ad2c37fe309260d9fcd24b72ccd2af687c7b1dcc6ec5"
	I1119 03:04:31.196475 1675821 cri.go:89] found id: "d72640f599edb0a7cc747d54663105ae5e186229c7ab646168a63821cf3e3666"
	I1119 03:04:31.196480 1675821 cri.go:89] found id: "1c5a8ad5bc6a5d13b6cef75a968c097e0e15feaca2933a332cc62792968879fc"
	I1119 03:04:31.196484 1675821 cri.go:89] found id: ""
	I1119 03:04:31.196541 1675821 ssh_runner.go:195] Run: sudo runc list -f json
	W1119 03:04:31.226724 1675821 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T03:04:31Z" level=error msg="open /run/runc: no such file or directory"
	I1119 03:04:31.226825 1675821 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 03:04:31.255209 1675821 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1119 03:04:31.255229 1675821 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1119 03:04:31.255293 1675821 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1119 03:04:31.283443 1675821 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1119 03:04:31.283883 1675821 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-800908" does not appear in /home/jenkins/minikube-integration/21924-1463525/kubeconfig
	I1119 03:04:31.284005 1675821 kubeconfig.go:62] /home/jenkins/minikube-integration/21924-1463525/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-800908" cluster setting kubeconfig missing "no-preload-800908" context setting]
	I1119 03:04:31.284313 1675821 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/kubeconfig: {Name:mk569dbb5fa76b01404eb61ebb04e9884665ea78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 03:04:31.287138 1675821 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1119 03:04:31.321259 1675821 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1119 03:04:31.321339 1675821 kubeadm.go:602] duration metric: took 66.102195ms to restartPrimaryControlPlane
	I1119 03:04:31.321363 1675821 kubeadm.go:403] duration metric: took 187.125055ms to StartCluster
	I1119 03:04:31.321407 1675821 settings.go:142] acquiring lock: {Name:mk60840cd190596747bdc8f4ec6ab9c30048bf11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 03:04:31.321485 1675821 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21924-1463525/kubeconfig
	I1119 03:04:31.322170 1675821 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/kubeconfig: {Name:mk569dbb5fa76b01404eb61ebb04e9884665ea78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 03:04:31.322442 1675821 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 03:04:31.322571 1675821 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 03:04:31.322902 1675821 addons.go:70] Setting storage-provisioner=true in profile "no-preload-800908"
	I1119 03:04:31.322931 1675821 addons.go:239] Setting addon storage-provisioner=true in "no-preload-800908"
	W1119 03:04:31.323032 1675821 addons.go:248] addon storage-provisioner should already be in state true
	I1119 03:04:31.323069 1675821 host.go:66] Checking if "no-preload-800908" exists ...
	I1119 03:04:31.323164 1675821 addons.go:70] Setting dashboard=true in profile "no-preload-800908"
	I1119 03:04:31.323192 1675821 addons.go:239] Setting addon dashboard=true in "no-preload-800908"
	W1119 03:04:31.323210 1675821 addons.go:248] addon dashboard should already be in state true
	I1119 03:04:31.323238 1675821 host.go:66] Checking if "no-preload-800908" exists ...
	I1119 03:04:31.323719 1675821 cli_runner.go:164] Run: docker container inspect no-preload-800908 --format={{.State.Status}}
	I1119 03:04:31.323786 1675821 cli_runner.go:164] Run: docker container inspect no-preload-800908 --format={{.State.Status}}
	I1119 03:04:31.322743 1675821 config.go:182] Loaded profile config "no-preload-800908": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 03:04:31.324299 1675821 addons.go:70] Setting default-storageclass=true in profile "no-preload-800908"
	I1119 03:04:31.324314 1675821 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-800908"
	I1119 03:04:31.324558 1675821 cli_runner.go:164] Run: docker container inspect no-preload-800908 --format={{.State.Status}}
	I1119 03:04:31.330533 1675821 out.go:179] * Verifying Kubernetes components...
	I1119 03:04:31.337712 1675821 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 03:04:31.389558 1675821 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1119 03:04:31.392866 1675821 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 03:04:31.395856 1675821 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 03:04:31.395877 1675821 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 03:04:31.395939 1675821 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-800908
	I1119 03:04:31.396070 1675821 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1119 03:04:31.398432 1675821 addons.go:239] Setting addon default-storageclass=true in "no-preload-800908"
	W1119 03:04:31.398459 1675821 addons.go:248] addon default-storageclass should already be in state true
	I1119 03:04:31.399396 1675821 host.go:66] Checking if "no-preload-800908" exists ...
	I1119 03:04:31.399470 1675821 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1119 03:04:31.399486 1675821 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1119 03:04:31.399554 1675821 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-800908
	I1119 03:04:31.399906 1675821 cli_runner.go:164] Run: docker container inspect no-preload-800908 --format={{.State.Status}}
	I1119 03:04:31.435951 1675821 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34945 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/no-preload-800908/id_rsa Username:docker}
	I1119 03:04:31.454749 1675821 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34945 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/no-preload-800908/id_rsa Username:docker}
	I1119 03:04:31.460603 1675821 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 03:04:31.460628 1675821 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 03:04:31.460691 1675821 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-800908
	I1119 03:04:31.493677 1675821 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34945 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/no-preload-800908/id_rsa Username:docker}
	I1119 03:04:31.802265 1675821 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1119 03:04:31.802357 1675821 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1119 03:04:31.852210 1675821 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 03:04:31.899496 1675821 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1119 03:04:31.899567 1675821 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1119 03:04:31.907969 1675821 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 03:04:31.930142 1675821 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 03:04:31.994633 1675821 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1119 03:04:31.994716 1675821 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1119 03:04:32.117072 1675821 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1119 03:04:32.117144 1675821 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1119 03:04:32.214963 1675821 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1119 03:04:32.215039 1675821 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1119 03:04:32.314289 1675821 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1119 03:04:32.314363 1675821 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1119 03:04:32.381856 1675821 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1119 03:04:32.381929 1675821 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1119 03:04:32.442578 1675821 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1119 03:04:32.442654 1675821 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1119 03:04:32.475472 1675821 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1119 03:04:32.475543 1675821 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1119 03:04:30.906475 1673805 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1119 03:04:30.907259 1673805 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-889743 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1119 03:04:31.524970 1673805 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1119 03:04:31.977880 1673805 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1119 03:04:33.261079 1673805 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1119 03:04:33.261659 1673805 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1119 03:04:33.988599 1673805 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1119 03:04:34.308962 1673805 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1119 03:04:34.842579 1673805 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1119 03:04:35.661873 1673805 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1119 03:04:36.322186 1673805 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1119 03:04:36.323316 1673805 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1119 03:04:36.326239 1673805 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1119 03:04:32.518133 1675821 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1119 03:04:36.329571 1673805 out.go:252]   - Booting up control plane ...
	I1119 03:04:36.329677 1673805 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1119 03:04:36.333872 1673805 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1119 03:04:36.335689 1673805 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1119 03:04:36.380991 1673805 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1119 03:04:36.381105 1673805 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1119 03:04:36.395764 1673805 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1119 03:04:36.395869 1673805 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1119 03:04:36.395912 1673805 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1119 03:04:36.658923 1673805 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1119 03:04:36.659052 1673805 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1119 03:04:37.661858 1673805 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.000807847s
	I1119 03:04:37.663415 1673805 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1119 03:04:37.663769 1673805 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1119 03:04:37.664600 1673805 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1119 03:04:37.665161 1673805 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1119 03:04:41.678115 1675821 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.825826031s)
	I1119 03:04:41.678170 1675821 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.770141712s)
	I1119 03:04:41.678516 1675821 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (9.748300738s)
	I1119 03:04:41.678553 1675821 node_ready.go:35] waiting up to 6m0s for node "no-preload-800908" to be "Ready" ...
	I1119 03:04:41.835784 1675821 node_ready.go:49] node "no-preload-800908" is "Ready"
	I1119 03:04:41.835832 1675821 node_ready.go:38] duration metric: took 157.251982ms for node "no-preload-800908" to be "Ready" ...
	I1119 03:04:41.835847 1675821 api_server.go:52] waiting for apiserver process to appear ...
	I1119 03:04:41.835921 1675821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 03:04:42.378875 1675821 api_server.go:72] duration metric: took 11.056088729s to wait for apiserver process to appear ...
	I1119 03:04:42.378908 1675821 api_server.go:88] waiting for apiserver healthz status ...
	I1119 03:04:42.378932 1675821 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1119 03:04:42.379459 1675821 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (9.861231423s)
	I1119 03:04:42.382679 1675821 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-800908 addons enable metrics-server
	
	I1119 03:04:42.385799 1675821 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1119 03:04:42.388899 1675821 addons.go:515] duration metric: took 11.066255609s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1119 03:04:42.407169 1675821 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1119 03:04:42.407205 1675821 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
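For reference only (not part of the captured log): the 500 above comes from the rbac/bootstrap-roles post-start hook not having finished, and the test simply re-polls /healthz until the apiserver answers 200 (which it does a few hundred milliseconds later below). A minimal Go sketch of that polling loop; the endpoint URL, attempt count, and delay are assumptions taken from this run, and certificate verification is skipped because the apiserver cert is signed by minikubeCA:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// pollHealthz retries GET on the healthz endpoint until it returns 200.
func pollHealthz(url string, attempts int, delay time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The host does not trust minikubeCA, so skip verification for this probe.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for i := 0; i < attempts; i++ {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(delay)
	}
	return fmt.Errorf("apiserver did not become healthy after %d attempts", attempts)
}

func main() {
	if err := pollHealthz("https://192.168.85.2:8443/healthz", 30, 2*time.Second); err != nil {
		fmt.Println(err)
	}
}
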
	I1119 03:04:44.288282 1673805 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 6.623215896s
	I1119 03:04:45.852812 1673805 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 8.187192755s
	I1119 03:04:46.670007 1673805 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 9.00579359s
	I1119 03:04:46.700029 1673805 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1119 03:04:46.719080 1673805 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1119 03:04:46.738019 1673805 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1119 03:04:46.738498 1673805 kubeadm.go:319] [mark-control-plane] Marking the node auto-889743 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1119 03:04:46.753079 1673805 kubeadm.go:319] [bootstrap-token] Using token: izm2gu.you734hy063zwcav
	I1119 03:04:42.879884 1675821 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1119 03:04:42.892510 1675821 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1119 03:04:42.893980 1675821 api_server.go:141] control plane version: v1.34.1
	I1119 03:04:42.894008 1675821 api_server.go:131] duration metric: took 515.092166ms to wait for apiserver health ...
	I1119 03:04:42.894018 1675821 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 03:04:42.928361 1675821 system_pods.go:59] 8 kube-system pods found
	I1119 03:04:42.928417 1675821 system_pods.go:61] "coredns-66bc5c9577-5gb8d" [f2cf06c3-a27f-4205-bf83-035adba73690] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 03:04:42.928428 1675821 system_pods.go:61] "etcd-no-preload-800908" [4b2e2353-9488-40c1-a11f-79c5089e6fe1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1119 03:04:42.928435 1675821 system_pods.go:61] "kindnet-hcdj9" [dc9e982d-8e14-47c6-a9a3-a4502602caa4] Running
	I1119 03:04:42.928443 1675821 system_pods.go:61] "kube-apiserver-no-preload-800908" [3378061b-4194-4784-b307-f948fa017d4a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1119 03:04:42.928449 1675821 system_pods.go:61] "kube-controller-manager-no-preload-800908" [cb7bca27-b010-4e89-adb5-9303f09112c5] Running
	I1119 03:04:42.928455 1675821 system_pods.go:61] "kube-proxy-59bnq" [6b6ee3ab-c31d-447c-895b-d341732cb482] Running
	I1119 03:04:42.928462 1675821 system_pods.go:61] "kube-scheduler-no-preload-800908" [214dd1d7-19ed-477b-8170-e9ddfdc6a14b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1119 03:04:42.928468 1675821 system_pods.go:61] "storage-provisioner" [41c9b9d6-c070-4f5d-92ec-e0f2baf1609d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 03:04:42.928481 1675821 system_pods.go:74] duration metric: took 34.458088ms to wait for pod list to return data ...
	I1119 03:04:42.928491 1675821 default_sa.go:34] waiting for default service account to be created ...
	I1119 03:04:42.936921 1675821 default_sa.go:45] found service account: "default"
	I1119 03:04:42.936954 1675821 default_sa.go:55] duration metric: took 8.45127ms for default service account to be created ...
	I1119 03:04:42.936964 1675821 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 03:04:43.013602 1675821 system_pods.go:86] 8 kube-system pods found
	I1119 03:04:43.013671 1675821 system_pods.go:89] "coredns-66bc5c9577-5gb8d" [f2cf06c3-a27f-4205-bf83-035adba73690] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 03:04:43.013683 1675821 system_pods.go:89] "etcd-no-preload-800908" [4b2e2353-9488-40c1-a11f-79c5089e6fe1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1119 03:04:43.013698 1675821 system_pods.go:89] "kindnet-hcdj9" [dc9e982d-8e14-47c6-a9a3-a4502602caa4] Running
	I1119 03:04:43.013706 1675821 system_pods.go:89] "kube-apiserver-no-preload-800908" [3378061b-4194-4784-b307-f948fa017d4a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1119 03:04:43.013716 1675821 system_pods.go:89] "kube-controller-manager-no-preload-800908" [cb7bca27-b010-4e89-adb5-9303f09112c5] Running
	I1119 03:04:43.013721 1675821 system_pods.go:89] "kube-proxy-59bnq" [6b6ee3ab-c31d-447c-895b-d341732cb482] Running
	I1119 03:04:43.013735 1675821 system_pods.go:89] "kube-scheduler-no-preload-800908" [214dd1d7-19ed-477b-8170-e9ddfdc6a14b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1119 03:04:43.013750 1675821 system_pods.go:89] "storage-provisioner" [41c9b9d6-c070-4f5d-92ec-e0f2baf1609d] Running
	I1119 03:04:43.013758 1675821 system_pods.go:126] duration metric: took 76.78764ms to wait for k8s-apps to be running ...
	I1119 03:04:43.013768 1675821 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 03:04:43.013835 1675821 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 03:04:43.036006 1675821 system_svc.go:56] duration metric: took 22.227022ms WaitForService to wait for kubelet
	I1119 03:04:43.036037 1675821 kubeadm.go:587] duration metric: took 11.713270342s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 03:04:43.036055 1675821 node_conditions.go:102] verifying NodePressure condition ...
	I1119 03:04:43.039043 1675821 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1119 03:04:43.039077 1675821 node_conditions.go:123] node cpu capacity is 2
	I1119 03:04:43.039089 1675821 node_conditions.go:105] duration metric: took 3.028106ms to run NodePressure ...
	I1119 03:04:43.039110 1675821 start.go:242] waiting for startup goroutines ...
	I1119 03:04:43.039122 1675821 start.go:247] waiting for cluster config update ...
	I1119 03:04:43.039134 1675821 start.go:256] writing updated cluster config ...
	I1119 03:04:43.039456 1675821 ssh_runner.go:195] Run: rm -f paused
	I1119 03:04:43.046737 1675821 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 03:04:43.051725 1675821 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-5gb8d" in "kube-system" namespace to be "Ready" or be gone ...
	W1119 03:04:45.062364 1675821 pod_ready.go:104] pod "coredns-66bc5c9577-5gb8d" is not "Ready", error: <nil>
	I1119 03:04:46.755582 1673805 out.go:252]   - Configuring RBAC rules ...
	I1119 03:04:46.755698 1673805 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1119 03:04:46.762541 1673805 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1119 03:04:46.771841 1673805 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1119 03:04:46.778006 1673805 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1119 03:04:46.784858 1673805 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1119 03:04:46.794738 1673805 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1119 03:04:47.080924 1673805 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1119 03:04:47.598722 1673805 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1119 03:04:48.077970 1673805 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1119 03:04:48.079243 1673805 kubeadm.go:319] 
	I1119 03:04:48.079326 1673805 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1119 03:04:48.079333 1673805 kubeadm.go:319] 
	I1119 03:04:48.079409 1673805 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1119 03:04:48.079414 1673805 kubeadm.go:319] 
	I1119 03:04:48.079439 1673805 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1119 03:04:48.079498 1673805 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1119 03:04:48.079556 1673805 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1119 03:04:48.079561 1673805 kubeadm.go:319] 
	I1119 03:04:48.079614 1673805 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1119 03:04:48.079619 1673805 kubeadm.go:319] 
	I1119 03:04:48.079667 1673805 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1119 03:04:48.079672 1673805 kubeadm.go:319] 
	I1119 03:04:48.079723 1673805 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1119 03:04:48.079800 1673805 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1119 03:04:48.079867 1673805 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1119 03:04:48.079872 1673805 kubeadm.go:319] 
	I1119 03:04:48.079956 1673805 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1119 03:04:48.080032 1673805 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1119 03:04:48.080036 1673805 kubeadm.go:319] 
	I1119 03:04:48.080119 1673805 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token izm2gu.you734hy063zwcav \
	I1119 03:04:48.080222 1673805 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:abb22cc8ae8e186956cff8cc7dabd6326c697e35c4ead85bcd3b5702cdc3f73a \
	I1119 03:04:48.080256 1673805 kubeadm.go:319] 	--control-plane 
	I1119 03:04:48.080261 1673805 kubeadm.go:319] 
	I1119 03:04:48.080345 1673805 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1119 03:04:48.080349 1673805 kubeadm.go:319] 
	I1119 03:04:48.080430 1673805 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token izm2gu.you734hy063zwcav \
	I1119 03:04:48.080532 1673805 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:abb22cc8ae8e186956cff8cc7dabd6326c697e35c4ead85bcd3b5702cdc3f73a 
	I1119 03:04:48.092947 1673805 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1119 03:04:48.093183 1673805 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1119 03:04:48.093288 1673805 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1119 03:04:48.093318 1673805 cni.go:84] Creating CNI manager for ""
	I1119 03:04:48.093326 1673805 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 03:04:48.096518 1673805 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1119 03:04:48.100447 1673805 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1119 03:04:48.108052 1673805 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1119 03:04:48.108070 1673805 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1119 03:04:48.149152 1673805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1119 03:04:48.613068 1673805 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1119 03:04:48.613195 1673805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 03:04:48.613274 1673805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-889743 minikube.k8s.io/updated_at=2025_11_19T03_04_48_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ebd49efe8b7d44f89fc96e21beffcd64dfee8277 minikube.k8s.io/name=auto-889743 minikube.k8s.io/primary=true
	I1119 03:04:49.091212 1673805 ops.go:34] apiserver oom_adj: -16
	I1119 03:04:49.091311 1673805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 03:04:49.591927 1673805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1119 03:04:47.561526 1675821 pod_ready.go:104] pod "coredns-66bc5c9577-5gb8d" is not "Ready", error: <nil>
	W1119 03:04:50.058398 1675821 pod_ready.go:104] pod "coredns-66bc5c9577-5gb8d" is not "Ready", error: <nil>
	I1119 03:04:50.091721 1673805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 03:04:50.591435 1673805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 03:04:51.091983 1673805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 03:04:51.591428 1673805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 03:04:52.092238 1673805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 03:04:52.591857 1673805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 03:04:52.817217 1673805 kubeadm.go:1114] duration metric: took 4.20406345s to wait for elevateKubeSystemPrivileges
	I1119 03:04:52.817244 1673805 kubeadm.go:403] duration metric: took 30.069681666s to StartCluster
	I1119 03:04:52.817261 1673805 settings.go:142] acquiring lock: {Name:mk60840cd190596747bdc8f4ec6ab9c30048bf11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 03:04:52.817323 1673805 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21924-1463525/kubeconfig
	I1119 03:04:52.818369 1673805 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/kubeconfig: {Name:mk569dbb5fa76b01404eb61ebb04e9884665ea78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 03:04:52.818560 1673805 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 03:04:52.818640 1673805 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1119 03:04:52.818909 1673805 config.go:182] Loaded profile config "auto-889743": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 03:04:52.819027 1673805 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 03:04:52.819087 1673805 addons.go:70] Setting storage-provisioner=true in profile "auto-889743"
	I1119 03:04:52.819101 1673805 addons.go:239] Setting addon storage-provisioner=true in "auto-889743"
	I1119 03:04:52.819124 1673805 host.go:66] Checking if "auto-889743" exists ...
	I1119 03:04:52.819584 1673805 cli_runner.go:164] Run: docker container inspect auto-889743 --format={{.State.Status}}
	I1119 03:04:52.820895 1673805 addons.go:70] Setting default-storageclass=true in profile "auto-889743"
	I1119 03:04:52.820926 1673805 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "auto-889743"
	I1119 03:04:52.821268 1673805 cli_runner.go:164] Run: docker container inspect auto-889743 --format={{.State.Status}}
	I1119 03:04:52.826792 1673805 out.go:179] * Verifying Kubernetes components...
	I1119 03:04:52.832349 1673805 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 03:04:52.862574 1673805 addons.go:239] Setting addon default-storageclass=true in "auto-889743"
	I1119 03:04:52.862613 1673805 host.go:66] Checking if "auto-889743" exists ...
	I1119 03:04:52.863169 1673805 cli_runner.go:164] Run: docker container inspect auto-889743 --format={{.State.Status}}
	I1119 03:04:52.873117 1673805 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 03:04:52.878366 1673805 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 03:04:52.878389 1673805 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 03:04:52.878472 1673805 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-889743
	I1119 03:04:52.919244 1673805 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 03:04:52.919265 1673805 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 03:04:52.919329 1673805 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-889743
	I1119 03:04:52.941992 1673805 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34940 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/auto-889743/id_rsa Username:docker}
	I1119 03:04:52.966943 1673805 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34940 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/auto-889743/id_rsa Username:docker}
	I1119 03:04:53.377494 1673805 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 03:04:53.545109 1673805 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1119 03:04:53.545234 1673805 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 03:04:53.573334 1673805 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 03:04:54.677924 1673805 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.132657524s)
	I1119 03:04:54.678786 1673805 node_ready.go:35] waiting up to 15m0s for node "auto-889743" to be "Ready" ...
	I1119 03:04:54.678999 1673805 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.133863421s)
	I1119 03:04:54.679020 1673805 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1119 03:04:54.995063 1673805 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.421692516s)
	I1119 03:04:54.998241 1673805 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1119 03:04:55.001338 1673805 addons.go:515] duration metric: took 2.182306459s for enable addons: enabled=[default-storageclass storage-provisioner]
	W1119 03:04:52.560238 1675821 pod_ready.go:104] pod "coredns-66bc5c9577-5gb8d" is not "Ready", error: <nil>
	W1119 03:04:54.560900 1675821 pod_ready.go:104] pod "coredns-66bc5c9577-5gb8d" is not "Ready", error: <nil>
	W1119 03:04:56.562003 1675821 pod_ready.go:104] pod "coredns-66bc5c9577-5gb8d" is not "Ready", error: <nil>
	I1119 03:04:55.184834 1673805 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-889743" context rescaled to 1 replicas
	W1119 03:04:56.682153 1673805 node_ready.go:57] node "auto-889743" has "Ready":"False" status (will retry)
	W1119 03:04:59.182341 1673805 node_ready.go:57] node "auto-889743" has "Ready":"False" status (will retry)
	W1119 03:04:59.057801 1675821 pod_ready.go:104] pod "coredns-66bc5c9577-5gb8d" is not "Ready", error: <nil>
	W1119 03:05:01.557205 1675821 pod_ready.go:104] pod "coredns-66bc5c9577-5gb8d" is not "Ready", error: <nil>
	W1119 03:05:01.683505 1673805 node_ready.go:57] node "auto-889743" has "Ready":"False" status (will retry)
	W1119 03:05:04.182653 1673805 node_ready.go:57] node "auto-889743" has "Ready":"False" status (will retry)
	W1119 03:05:04.057424 1675821 pod_ready.go:104] pod "coredns-66bc5c9577-5gb8d" is not "Ready", error: <nil>
	W1119 03:05:06.058179 1675821 pod_ready.go:104] pod "coredns-66bc5c9577-5gb8d" is not "Ready", error: <nil>
	W1119 03:05:06.681991 1673805 node_ready.go:57] node "auto-889743" has "Ready":"False" status (will retry)
	W1119 03:05:08.682220 1673805 node_ready.go:57] node "auto-889743" has "Ready":"False" status (will retry)
	W1119 03:05:08.557071 1675821 pod_ready.go:104] pod "coredns-66bc5c9577-5gb8d" is not "Ready", error: <nil>
	W1119 03:05:10.557736 1675821 pod_ready.go:104] pod "coredns-66bc5c9577-5gb8d" is not "Ready", error: <nil>
	W1119 03:05:11.182556 1673805 node_ready.go:57] node "auto-889743" has "Ready":"False" status (will retry)
	W1119 03:05:13.182744 1673805 node_ready.go:57] node "auto-889743" has "Ready":"False" status (will retry)
	W1119 03:05:12.558512 1675821 pod_ready.go:104] pod "coredns-66bc5c9577-5gb8d" is not "Ready", error: <nil>
	I1119 03:05:14.058423 1675821 pod_ready.go:94] pod "coredns-66bc5c9577-5gb8d" is "Ready"
	I1119 03:05:14.058451 1675821 pod_ready.go:86] duration metric: took 31.006688461s for pod "coredns-66bc5c9577-5gb8d" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:05:14.061388 1675821 pod_ready.go:83] waiting for pod "etcd-no-preload-800908" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:05:14.067018 1675821 pod_ready.go:94] pod "etcd-no-preload-800908" is "Ready"
	I1119 03:05:14.067047 1675821 pod_ready.go:86] duration metric: took 5.632347ms for pod "etcd-no-preload-800908" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:05:14.069905 1675821 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-800908" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:05:14.075237 1675821 pod_ready.go:94] pod "kube-apiserver-no-preload-800908" is "Ready"
	I1119 03:05:14.075261 1675821 pod_ready.go:86] duration metric: took 5.325377ms for pod "kube-apiserver-no-preload-800908" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:05:14.077812 1675821 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-800908" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:05:14.256574 1675821 pod_ready.go:94] pod "kube-controller-manager-no-preload-800908" is "Ready"
	I1119 03:05:14.256651 1675821 pod_ready.go:86] duration metric: took 178.810837ms for pod "kube-controller-manager-no-preload-800908" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:05:14.456820 1675821 pod_ready.go:83] waiting for pod "kube-proxy-59bnq" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:05:14.855845 1675821 pod_ready.go:94] pod "kube-proxy-59bnq" is "Ready"
	I1119 03:05:14.855872 1675821 pod_ready.go:86] duration metric: took 399.023268ms for pod "kube-proxy-59bnq" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:05:15.056311 1675821 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-800908" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:05:15.456176 1675821 pod_ready.go:94] pod "kube-scheduler-no-preload-800908" is "Ready"
	I1119 03:05:15.456206 1675821 pod_ready.go:86] duration metric: took 399.869653ms for pod "kube-scheduler-no-preload-800908" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:05:15.456220 1675821 pod_ready.go:40] duration metric: took 32.409431473s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 03:05:15.511262 1675821 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1119 03:05:15.516424 1675821 out.go:179] * Done! kubectl is now configured to use "no-preload-800908" cluster and "default" namespace by default
	W1119 03:05:15.681473 1673805 node_ready.go:57] node "auto-889743" has "Ready":"False" status (will retry)
	W1119 03:05:17.682390 1673805 node_ready.go:57] node "auto-889743" has "Ready":"False" status (will retry)
	W1119 03:05:20.182528 1673805 node_ready.go:57] node "auto-889743" has "Ready":"False" status (will retry)
	W1119 03:05:22.682264 1673805 node_ready.go:57] node "auto-889743" has "Ready":"False" status (will retry)
	W1119 03:05:25.182302 1673805 node_ready.go:57] node "auto-889743" has "Ready":"False" status (will retry)
	W1119 03:05:27.182430 1673805 node_ready.go:57] node "auto-889743" has "Ready":"False" status (will retry)
	W1119 03:05:29.682826 1673805 node_ready.go:57] node "auto-889743" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Nov 19 03:05:09 no-preload-800908 crio[661]: time="2025-11-19T03:05:09.9309466Z" level=info msg="Removed container 3080995646c4c1115cc00e958b65b6bf12f7dc431aa8b6c75b84f491b2ed1c0c: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x82d2/dashboard-metrics-scraper" id=82c59f63-c4ed-4474-aa12-fdec436a56a8 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 19 03:05:12 no-preload-800908 conmon[1138]: conmon a4b14efb5df254be9911 <ninfo>: container 1158 exited with status 1
	Nov 19 03:05:12 no-preload-800908 crio[661]: time="2025-11-19T03:05:12.927921345Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=d6944dd3-f9c1-420a-9165-10495c0624d2 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 03:05:12 no-preload-800908 crio[661]: time="2025-11-19T03:05:12.928882376Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=482ee2fd-802f-4623-b96b-c450946493b0 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 03:05:12 no-preload-800908 crio[661]: time="2025-11-19T03:05:12.929828334Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=0f514a3d-0e5c-4065-84ca-1f223d28fd70 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 03:05:12 no-preload-800908 crio[661]: time="2025-11-19T03:05:12.929948716Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 03:05:12 no-preload-800908 crio[661]: time="2025-11-19T03:05:12.937781132Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 03:05:12 no-preload-800908 crio[661]: time="2025-11-19T03:05:12.937948643Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/1e07bacc1c5d6ccf720bf8db4005c7cc4fffca98a3ecf22af83c604e7c7ee1fa/merged/etc/passwd: no such file or directory"
	Nov 19 03:05:12 no-preload-800908 crio[661]: time="2025-11-19T03:05:12.937970583Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/1e07bacc1c5d6ccf720bf8db4005c7cc4fffca98a3ecf22af83c604e7c7ee1fa/merged/etc/group: no such file or directory"
	Nov 19 03:05:12 no-preload-800908 crio[661]: time="2025-11-19T03:05:12.938203396Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 03:05:12 no-preload-800908 crio[661]: time="2025-11-19T03:05:12.975719153Z" level=info msg="Created container 5c44fc33f2c6f591d084800a048552c1fe51bc9a96b100574aab26d266ae2d23: kube-system/storage-provisioner/storage-provisioner" id=0f514a3d-0e5c-4065-84ca-1f223d28fd70 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 03:05:12 no-preload-800908 crio[661]: time="2025-11-19T03:05:12.977362183Z" level=info msg="Starting container: 5c44fc33f2c6f591d084800a048552c1fe51bc9a96b100574aab26d266ae2d23" id=97a65a04-4f72-4aef-b828-02c14f784e70 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 03:05:12 no-preload-800908 crio[661]: time="2025-11-19T03:05:12.979239848Z" level=info msg="Started container" PID=1650 containerID=5c44fc33f2c6f591d084800a048552c1fe51bc9a96b100574aab26d266ae2d23 description=kube-system/storage-provisioner/storage-provisioner id=97a65a04-4f72-4aef-b828-02c14f784e70 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2e41347459c5e3c18cc7c1d6c7a0d6de584709b86f43ba68c936dae3a1e084fd
	Nov 19 03:05:22 no-preload-800908 crio[661]: time="2025-11-19T03:05:22.010198568Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 03:05:22 no-preload-800908 crio[661]: time="2025-11-19T03:05:22.018716576Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 03:05:22 no-preload-800908 crio[661]: time="2025-11-19T03:05:22.018759595Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 03:05:22 no-preload-800908 crio[661]: time="2025-11-19T03:05:22.018788533Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 03:05:22 no-preload-800908 crio[661]: time="2025-11-19T03:05:22.023393094Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 03:05:22 no-preload-800908 crio[661]: time="2025-11-19T03:05:22.02342982Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 03:05:22 no-preload-800908 crio[661]: time="2025-11-19T03:05:22.023453097Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 03:05:22 no-preload-800908 crio[661]: time="2025-11-19T03:05:22.027015579Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 03:05:22 no-preload-800908 crio[661]: time="2025-11-19T03:05:22.027052559Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 03:05:22 no-preload-800908 crio[661]: time="2025-11-19T03:05:22.027140351Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 03:05:22 no-preload-800908 crio[661]: time="2025-11-19T03:05:22.03073032Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 03:05:22 no-preload-800908 crio[661]: time="2025-11-19T03:05:22.030764739Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	5c44fc33f2c6f       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           18 seconds ago       Running             storage-provisioner         2                   2e41347459c5e       storage-provisioner                          kube-system
	1a78dba8bce36       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           21 seconds ago       Exited              dashboard-metrics-scraper   2                   0ca84d350c936       dashboard-metrics-scraper-6ffb444bf9-x82d2   kubernetes-dashboard
	c867c5f07d95f       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   32 seconds ago       Running             kubernetes-dashboard        0                   ac1f4936eac03       kubernetes-dashboard-855c9754f9-kwdms        kubernetes-dashboard
	9aebe2a3b6de1       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           49 seconds ago       Running             coredns                     1                   fe2bdb28343bf       coredns-66bc5c9577-5gb8d                     kube-system
	d2367a6e58c55       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           49 seconds ago       Running             busybox                     1                   d98822b45a80b       busybox                                      default
	8aba6b7c7be44       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           49 seconds ago       Running             kindnet-cni                 1                   6329d08a63d37       kindnet-hcdj9                                kube-system
	a4b14efb5df25       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           49 seconds ago       Exited              storage-provisioner         1                   2e41347459c5e       storage-provisioner                          kube-system
	c47e30a501ed7       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           50 seconds ago       Running             kube-proxy                  1                   00f0f90c14f1a       kube-proxy-59bnq                             kube-system
	1d586c7c3109f       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           59 seconds ago       Running             kube-controller-manager     1                   5b026e48adcf0       kube-controller-manager-no-preload-800908    kube-system
	bb7e2b0cb0cd0       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           59 seconds ago       Running             kube-apiserver              1                   71173724e5239       kube-apiserver-no-preload-800908             kube-system
	d72640f599edb       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   5ed89208c5638       kube-scheduler-no-preload-800908             kube-system
	1c5a8ad5bc6a5       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   845682ac46fa1       etcd-no-preload-800908                       kube-system
	
	
	==> coredns [9aebe2a3b6de103c7f8b9e2bd8f80a05a5befe8af91661090dd46344cae6c829] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45707 - 53196 "HINFO IN 2505686162092607696.3713221052933664493. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.03496525s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-800908
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-800908
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ebd49efe8b7d44f89fc96e21beffcd64dfee8277
	                    minikube.k8s.io/name=no-preload-800908
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T03_03_32_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 03:03:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-800908
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 03:05:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 03:05:00 +0000   Wed, 19 Nov 2025 03:03:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 03:05:00 +0000   Wed, 19 Nov 2025 03:03:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 03:05:00 +0000   Wed, 19 Nov 2025 03:03:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 03:05:00 +0000   Wed, 19 Nov 2025 03:03:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-800908
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                792d2464-6007-420a-8ab8-fddc03078e19
	  Boot ID:                    b92b1939-fcd0-45dc-ac89-2d161566a71c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-66bc5c9577-5gb8d                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     115s
	  kube-system                 etcd-no-preload-800908                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m
	  kube-system                 kindnet-hcdj9                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      116s
	  kube-system                 kube-apiserver-no-preload-800908              250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-controller-manager-no-preload-800908     200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-proxy-59bnq                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-scheduler-no-preload-800908              100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-x82d2    0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-kwdms         0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 113s                   kube-proxy       
	  Normal   Starting                 48s                    kube-proxy       
	  Warning  CgroupV1                 2m13s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m13s (x8 over 2m13s)  kubelet          Node no-preload-800908 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m13s (x8 over 2m13s)  kubelet          Node no-preload-800908 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m13s (x8 over 2m13s)  kubelet          Node no-preload-800908 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m1s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m1s                   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m                     kubelet          Node no-preload-800908 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m                     kubelet          Node no-preload-800908 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m                     kubelet          Node no-preload-800908 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           116s                   node-controller  Node no-preload-800908 event: Registered Node no-preload-800908 in Controller
	  Normal   NodeReady                98s                    kubelet          Node no-preload-800908 status is now: NodeReady
	  Normal   Starting                 61s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 61s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  61s (x8 over 61s)      kubelet          Node no-preload-800908 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    61s (x8 over 61s)      kubelet          Node no-preload-800908 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     61s (x8 over 61s)      kubelet          Node no-preload-800908 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           46s                    node-controller  Node no-preload-800908 event: Registered Node no-preload-800908 in Controller
	
	
	==> dmesg <==
	[Nov19 02:42] overlayfs: idmapped layers are currently not supported
	[ +16.386117] overlayfs: idmapped layers are currently not supported
	[Nov19 02:43] overlayfs: idmapped layers are currently not supported
	[ +23.762081] overlayfs: idmapped layers are currently not supported
	[Nov19 02:45] overlayfs: idmapped layers are currently not supported
	[Nov19 02:46] overlayfs: idmapped layers are currently not supported
	[Nov19 02:48] overlayfs: idmapped layers are currently not supported
	[Nov19 02:50] overlayfs: idmapped layers are currently not supported
	[ +30.622614] overlayfs: idmapped layers are currently not supported
	[Nov19 02:53] overlayfs: idmapped layers are currently not supported
	[Nov19 02:55] overlayfs: idmapped layers are currently not supported
	[ +48.629499] overlayfs: idmapped layers are currently not supported
	[Nov19 02:56] overlayfs: idmapped layers are currently not supported
	[ +31.470515] overlayfs: idmapped layers are currently not supported
	[Nov19 02:57] overlayfs: idmapped layers are currently not supported
	[Nov19 02:58] overlayfs: idmapped layers are currently not supported
	[Nov19 03:00] overlayfs: idmapped layers are currently not supported
	[  +8.385032] overlayfs: idmapped layers are currently not supported
	[Nov19 03:01] overlayfs: idmapped layers are currently not supported
	[  +9.842210] overlayfs: idmapped layers are currently not supported
	[Nov19 03:02] overlayfs: idmapped layers are currently not supported
	[Nov19 03:03] overlayfs: idmapped layers are currently not supported
	[ +33.377847] overlayfs: idmapped layers are currently not supported
	[Nov19 03:04] overlayfs: idmapped layers are currently not supported
	[  +7.075500] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [1c5a8ad5bc6a5d13b6cef75a968c097e0e15feaca2933a332cc62792968879fc] <==
	{"level":"warn","ts":"2025-11-19T03:04:35.096868Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:04:35.185043Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:04:35.214452Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:04:35.255419Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:04:35.325795Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:04:35.368442Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:04:35.401286Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:04:35.423499Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:04:35.476476Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:04:35.530034Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:04:35.589530Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:04:35.630316Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:04:35.670528Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:04:35.778085Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:04:35.842632Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:04:35.895687Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:04:35.962206Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:04:36.002960Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:04:36.051150Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:04:36.255961Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:04:40.417867Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"119.84165ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/view\" limit:1 ","response":"range_response_count:1 size:2208"}
	{"level":"info","ts":"2025-11-19T03:04:40.417940Z","caller":"traceutil/trace.go:172","msg":"trace[2022240257] range","detail":"{range_begin:/registry/clusterroles/view; range_end:; response_count:1; response_revision:485; }","duration":"119.943621ms","start":"2025-11-19T03:04:40.297984Z","end":"2025-11-19T03:04:40.417928Z","steps":["trace[2022240257] 'agreement among raft nodes before linearized reading'  (duration: 119.719619ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T03:04:40.422084Z","caller":"traceutil/trace.go:172","msg":"trace[1435618654] transaction","detail":"{read_only:false; response_revision:485; number_of_response:1; }","duration":"255.34206ms","start":"2025-11-19T03:04:40.166718Z","end":"2025-11-19T03:04:40.422060Z","steps":["trace[1435618654] 'process raft request'  (duration: 131.192816ms)","trace[1435618654] 'compare'  (duration: 92.030805ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-19T03:04:40.432037Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"133.828819ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/default/default\" limit:1 ","response":"range_response_count:1 size:171"}
	{"level":"info","ts":"2025-11-19T03:04:40.432090Z","caller":"traceutil/trace.go:172","msg":"trace[856781732] range","detail":"{range_begin:/registry/serviceaccounts/default/default; range_end:; response_count:1; response_revision:485; }","duration":"133.89559ms","start":"2025-11-19T03:04:40.298183Z","end":"2025-11-19T03:04:40.432078Z","steps":["trace[856781732] 'agreement among raft nodes before linearized reading'  (duration: 133.731312ms)"],"step_count":1}
	
	
	==> kernel <==
	 03:05:31 up 10:47,  0 user,  load average: 4.58, 4.30, 3.22
	Linux no-preload-800908 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [8aba6b7c7be445a2875873b755efd1399e985179e6a913cc3aefc480b738613c] <==
	I1119 03:04:41.830195       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 03:04:41.830588       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1119 03:04:41.830797       1 main.go:148] setting mtu 1500 for CNI 
	I1119 03:04:41.830856       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 03:04:41.830897       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T03:04:41Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 03:04:42.002596       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 03:04:42.026405       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 03:04:42.026566       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 03:04:42.027357       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1119 03:05:12.011031       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1119 03:05:12.027926       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1119 03:05:12.028610       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1119 03:05:12.028815       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1119 03:05:13.327708       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 03:05:13.328388       1 metrics.go:72] Registering metrics
	I1119 03:05:13.328510       1 controller.go:711] "Syncing nftables rules"
	I1119 03:05:22.002282       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1119 03:05:22.002320       1 main.go:301] handling current node
	
	
	==> kube-apiserver [bb7e2b0cb0cd02d62ac7ad2c37fe309260d9fcd24b72ccd2af687c7b1dcc6ec5] <==
	I1119 03:04:39.079870       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1119 03:04:39.081141       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1119 03:04:39.081182       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1119 03:04:39.083519       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 03:04:39.084744       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1119 03:04:39.259066       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1119 03:04:39.259743       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1119 03:04:39.259759       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1119 03:04:39.392341       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1119 03:04:39.402590       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1119 03:04:39.402928       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1119 03:04:39.462093       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1119 03:04:39.463415       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 03:04:39.463791       1 cache.go:39] Caches are synced for autoregister controller
	E1119 03:04:39.551856       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1119 03:04:40.433695       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1119 03:04:40.722312       1 controller.go:667] quota admission added evaluator for: namespaces
	I1119 03:04:41.177819       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1119 03:04:41.554492       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 03:04:41.752184       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 03:04:42.305891       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.159.183"}
	I1119 03:04:42.347251       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.18.7"}
	I1119 03:04:45.579814       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1119 03:04:45.677583       1 controller.go:667] quota admission added evaluator for: endpoints
	I1119 03:04:45.782588       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [1d586c7c3109fd5ba0aba02ff22f254bea2462e97b24f5d3f134dc24d068e0e6] <==
	I1119 03:04:45.372073       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1119 03:04:45.378096       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1119 03:04:45.378903       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1119 03:04:45.380157       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1119 03:04:45.385575       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1119 03:04:45.385621       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1119 03:04:45.386783       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1119 03:04:45.389148       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1119 03:04:45.391117       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1119 03:04:45.393227       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1119 03:04:45.402337       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1119 03:04:45.409601       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1119 03:04:45.410465       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1119 03:04:45.411525       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 03:04:45.416924       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 03:04:45.420800       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1119 03:04:45.421097       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1119 03:04:45.421165       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1119 03:04:45.430705       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1119 03:04:45.436851       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 03:04:45.446194       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1119 03:04:45.449414       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1119 03:04:45.471922       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 03:04:45.471951       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1119 03:04:45.471959       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [c47e30a501ed736547bbb4377e6df1e33a7226c1b2c94803f55b4e972ff18abd] <==
	I1119 03:04:42.304971       1 server_linux.go:53] "Using iptables proxy"
	I1119 03:04:42.487207       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 03:04:42.589818       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 03:04:42.589930       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1119 03:04:42.590054       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 03:04:42.743744       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 03:04:42.743864       1 server_linux.go:132] "Using iptables Proxier"
	I1119 03:04:42.822212       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 03:04:42.840184       1 server.go:527] "Version info" version="v1.34.1"
	I1119 03:04:42.840290       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 03:04:42.855716       1 config.go:200] "Starting service config controller"
	I1119 03:04:42.863728       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 03:04:42.856093       1 config.go:309] "Starting node config controller"
	I1119 03:04:42.920657       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 03:04:42.920727       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 03:04:42.920765       1 config.go:106] "Starting endpoint slice config controller"
	I1119 03:04:42.920792       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 03:04:42.928054       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 03:04:42.928992       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 03:04:42.992981       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1119 03:04:43.028242       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1119 03:04:43.046693       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [d72640f599edb0a7cc747d54663105ae5e186229c7ab646168a63821cf3e3666] <==
	I1119 03:04:37.719231       1 serving.go:386] Generated self-signed cert in-memory
	I1119 03:04:41.943348       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1119 03:04:41.964046       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 03:04:42.062810       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1119 03:04:42.062959       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1119 03:04:42.063033       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1119 03:04:42.063050       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 03:04:42.063126       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 03:04:42.063271       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1119 03:04:42.063328       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1119 03:04:42.063138       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1119 03:04:42.164725       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1119 03:04:42.164916       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1119 03:04:42.165068       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 19 03:04:41 no-preload-800908 kubelet[785]: W1119 03:04:41.052987     785 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/b531313c62c45257823b101ae62d386ff7365c2550425d41cad2e93644f4aebd/crio-d98822b45a80bad6fbccbb5813f9f591f13992b803c7833a4d9b579e4f2359f1 WatchSource:0}: Error finding container d98822b45a80bad6fbccbb5813f9f591f13992b803c7833a4d9b579e4f2359f1: Status 404 returned error can't find the container with id d98822b45a80bad6fbccbb5813f9f591f13992b803c7833a4d9b579e4f2359f1
	Nov 19 03:04:46 no-preload-800908 kubelet[785]: I1119 03:04:46.088510     785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ht4tl\" (UniqueName: \"kubernetes.io/projected/cda3cac0-7b97-4389-83ee-aafe0acf4899-kube-api-access-ht4tl\") pod \"dashboard-metrics-scraper-6ffb444bf9-x82d2\" (UID: \"cda3cac0-7b97-4389-83ee-aafe0acf4899\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x82d2"
	Nov 19 03:04:46 no-preload-800908 kubelet[785]: I1119 03:04:46.089791     785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/cda3cac0-7b97-4389-83ee-aafe0acf4899-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-x82d2\" (UID: \"cda3cac0-7b97-4389-83ee-aafe0acf4899\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x82d2"
	Nov 19 03:04:46 no-preload-800908 kubelet[785]: I1119 03:04:46.190387     785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85sgd\" (UniqueName: \"kubernetes.io/projected/c2f26d02-e618-4b0f-9089-8c76b6e21ca7-kube-api-access-85sgd\") pod \"kubernetes-dashboard-855c9754f9-kwdms\" (UID: \"c2f26d02-e618-4b0f-9089-8c76b6e21ca7\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-kwdms"
	Nov 19 03:04:46 no-preload-800908 kubelet[785]: I1119 03:04:46.190466     785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/c2f26d02-e618-4b0f-9089-8c76b6e21ca7-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-kwdms\" (UID: \"c2f26d02-e618-4b0f-9089-8c76b6e21ca7\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-kwdms"
	Nov 19 03:04:46 no-preload-800908 kubelet[785]: W1119 03:04:46.371253     785 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/b531313c62c45257823b101ae62d386ff7365c2550425d41cad2e93644f4aebd/crio-ac1f4936eac03cd78f3fe92080c152b3648f2f01b6638a7a4ac3b7489e09d041 WatchSource:0}: Error finding container ac1f4936eac03cd78f3fe92080c152b3648f2f01b6638a7a4ac3b7489e09d041: Status 404 returned error can't find the container with id ac1f4936eac03cd78f3fe92080c152b3648f2f01b6638a7a4ac3b7489e09d041
	Nov 19 03:04:52 no-preload-800908 kubelet[785]: I1119 03:04:52.846755     785 scope.go:117] "RemoveContainer" containerID="b389957eee06888fbad7a4b52ed3b5abe168324f4457279c05f614c51e0fbe96"
	Nov 19 03:04:53 no-preload-800908 kubelet[785]: I1119 03:04:53.870918     785 scope.go:117] "RemoveContainer" containerID="b389957eee06888fbad7a4b52ed3b5abe168324f4457279c05f614c51e0fbe96"
	Nov 19 03:04:53 no-preload-800908 kubelet[785]: I1119 03:04:53.873417     785 scope.go:117] "RemoveContainer" containerID="3080995646c4c1115cc00e958b65b6bf12f7dc431aa8b6c75b84f491b2ed1c0c"
	Nov 19 03:04:53 no-preload-800908 kubelet[785]: E1119 03:04:53.874004     785 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-x82d2_kubernetes-dashboard(cda3cac0-7b97-4389-83ee-aafe0acf4899)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x82d2" podUID="cda3cac0-7b97-4389-83ee-aafe0acf4899"
	Nov 19 03:04:54 no-preload-800908 kubelet[785]: I1119 03:04:54.874891     785 scope.go:117] "RemoveContainer" containerID="3080995646c4c1115cc00e958b65b6bf12f7dc431aa8b6c75b84f491b2ed1c0c"
	Nov 19 03:04:54 no-preload-800908 kubelet[785]: E1119 03:04:54.875063     785 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-x82d2_kubernetes-dashboard(cda3cac0-7b97-4389-83ee-aafe0acf4899)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x82d2" podUID="cda3cac0-7b97-4389-83ee-aafe0acf4899"
	Nov 19 03:04:56 no-preload-800908 kubelet[785]: I1119 03:04:56.310410     785 scope.go:117] "RemoveContainer" containerID="3080995646c4c1115cc00e958b65b6bf12f7dc431aa8b6c75b84f491b2ed1c0c"
	Nov 19 03:04:56 no-preload-800908 kubelet[785]: E1119 03:04:56.310578     785 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-x82d2_kubernetes-dashboard(cda3cac0-7b97-4389-83ee-aafe0acf4899)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x82d2" podUID="cda3cac0-7b97-4389-83ee-aafe0acf4899"
	Nov 19 03:05:09 no-preload-800908 kubelet[785]: I1119 03:05:09.382107     785 scope.go:117] "RemoveContainer" containerID="3080995646c4c1115cc00e958b65b6bf12f7dc431aa8b6c75b84f491b2ed1c0c"
	Nov 19 03:05:09 no-preload-800908 kubelet[785]: I1119 03:05:09.916430     785 scope.go:117] "RemoveContainer" containerID="3080995646c4c1115cc00e958b65b6bf12f7dc431aa8b6c75b84f491b2ed1c0c"
	Nov 19 03:05:09 no-preload-800908 kubelet[785]: I1119 03:05:09.916713     785 scope.go:117] "RemoveContainer" containerID="1a78dba8bce368677aa036142ffa9608c3867766e29fb2e1011d917c5d6f239f"
	Nov 19 03:05:09 no-preload-800908 kubelet[785]: E1119 03:05:09.916864     785 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-x82d2_kubernetes-dashboard(cda3cac0-7b97-4389-83ee-aafe0acf4899)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x82d2" podUID="cda3cac0-7b97-4389-83ee-aafe0acf4899"
	Nov 19 03:05:09 no-preload-800908 kubelet[785]: I1119 03:05:09.939651     785 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-kwdms" podStartSLOduration=13.225386737000001 podStartE2EDuration="24.939634753s" podCreationTimestamp="2025-11-19 03:04:45 +0000 UTC" firstStartedPulling="2025-11-19 03:04:46.377988752 +0000 UTC m=+16.315880448" lastFinishedPulling="2025-11-19 03:04:58.09223676 +0000 UTC m=+28.030128464" observedRunningTime="2025-11-19 03:04:58.90166311 +0000 UTC m=+28.839554814" watchObservedRunningTime="2025-11-19 03:05:09.939634753 +0000 UTC m=+39.877526457"
	Nov 19 03:05:12 no-preload-800908 kubelet[785]: I1119 03:05:12.927055     785 scope.go:117] "RemoveContainer" containerID="a4b14efb5df254be991154d1dfd68e56342ac94b3a3a071d5cdf8aa75b5e2b0a"
	Nov 19 03:05:16 no-preload-800908 kubelet[785]: I1119 03:05:16.310290     785 scope.go:117] "RemoveContainer" containerID="1a78dba8bce368677aa036142ffa9608c3867766e29fb2e1011d917c5d6f239f"
	Nov 19 03:05:16 no-preload-800908 kubelet[785]: E1119 03:05:16.310494     785 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-x82d2_kubernetes-dashboard(cda3cac0-7b97-4389-83ee-aafe0acf4899)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x82d2" podUID="cda3cac0-7b97-4389-83ee-aafe0acf4899"
	Nov 19 03:05:27 no-preload-800908 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 19 03:05:27 no-preload-800908 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 19 03:05:27 no-preload-800908 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [c867c5f07d95fb9e228a76ad97bd7ec2f39291ceef9462dfcb386be776ad518c] <==
	2025/11/19 03:04:58 Using namespace: kubernetes-dashboard
	2025/11/19 03:04:58 Using in-cluster config to connect to apiserver
	2025/11/19 03:04:58 Using secret token for csrf signing
	2025/11/19 03:04:58 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/19 03:04:58 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/19 03:04:58 Successful initial request to the apiserver, version: v1.34.1
	2025/11/19 03:04:58 Generating JWE encryption key
	2025/11/19 03:04:58 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/19 03:04:58 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/19 03:04:58 Initializing JWE encryption key from synchronized object
	2025/11/19 03:04:58 Creating in-cluster Sidecar client
	2025/11/19 03:04:58 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/19 03:04:58 Serving insecurely on HTTP port: 9090
	2025/11/19 03:05:28 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/19 03:04:58 Starting overwatch
	
	
	==> storage-provisioner [5c44fc33f2c6f591d084800a048552c1fe51bc9a96b100574aab26d266ae2d23] <==
	I1119 03:05:12.995522       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1119 03:05:13.009051       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1119 03:05:13.009117       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1119 03:05:13.012419       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:05:16.467082       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:05:20.728066       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:05:24.326099       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:05:27.379335       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:05:30.405701       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:05:30.413275       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 03:05:30.413574       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1119 03:05:30.413834       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-800908_1607a906-b6a2-4013-9bf6-b35b9e140de0!
	I1119 03:05:30.415645       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ba963751-4855-448d-b28c-3b35fd351123", APIVersion:"v1", ResourceVersion:"677", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-800908_1607a906-b6a2-4013-9bf6-b35b9e140de0 became leader
	W1119 03:05:30.426562       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:05:30.433914       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 03:05:30.514331       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-800908_1607a906-b6a2-4013-9bf6-b35b9e140de0!
	
	
	==> storage-provisioner [a4b14efb5df254be991154d1dfd68e56342ac94b3a3a071d5cdf8aa75b5e2b0a] <==
	I1119 03:04:41.946594       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1119 03:05:12.006078       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-800908 -n no-preload-800908
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-800908 -n no-preload-800908: exit status 2 (369.250402ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-800908 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-800908
helpers_test.go:243: (dbg) docker inspect no-preload-800908:

-- stdout --
	[
	    {
	        "Id": "b531313c62c45257823b101ae62d386ff7365c2550425d41cad2e93644f4aebd",
	        "Created": "2025-11-19T03:02:36.622194348Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1676021,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T03:04:22.856247234Z",
	            "FinishedAt": "2025-11-19T03:04:21.83067211Z"
	        },
	        "Image": "sha256:6dfeb5329cc83d126555f88f43b09eef7a09c7f546c9166b94d33747df91b6df",
	        "ResolvConfPath": "/var/lib/docker/containers/b531313c62c45257823b101ae62d386ff7365c2550425d41cad2e93644f4aebd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b531313c62c45257823b101ae62d386ff7365c2550425d41cad2e93644f4aebd/hostname",
	        "HostsPath": "/var/lib/docker/containers/b531313c62c45257823b101ae62d386ff7365c2550425d41cad2e93644f4aebd/hosts",
	        "LogPath": "/var/lib/docker/containers/b531313c62c45257823b101ae62d386ff7365c2550425d41cad2e93644f4aebd/b531313c62c45257823b101ae62d386ff7365c2550425d41cad2e93644f4aebd-json.log",
	        "Name": "/no-preload-800908",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-800908:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-800908",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b531313c62c45257823b101ae62d386ff7365c2550425d41cad2e93644f4aebd",
	                "LowerDir": "/var/lib/docker/overlay2/5f2a991abb1ac9e1d4f1b633bb11e2415ce1437a860a51427c5b7ab54fc65618-init/diff:/var/lib/docker/overlay2/c48d08e2bd245db4e1c5c6447aff9f72126e9377265a1f1172daf5070a059e2a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5f2a991abb1ac9e1d4f1b633bb11e2415ce1437a860a51427c5b7ab54fc65618/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5f2a991abb1ac9e1d4f1b633bb11e2415ce1437a860a51427c5b7ab54fc65618/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5f2a991abb1ac9e1d4f1b633bb11e2415ce1437a860a51427c5b7ab54fc65618/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-800908",
	                "Source": "/var/lib/docker/volumes/no-preload-800908/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-800908",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-800908",
	                "name.minikube.sigs.k8s.io": "no-preload-800908",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c99dc21eaad7e489de2f50de801d37f7251dc481120a18ed507d6cd7bf73eb01",
	            "SandboxKey": "/var/run/docker/netns/c99dc21eaad7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34945"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34946"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34949"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34947"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34948"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-800908": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4e:32:33:9b:d6:82",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "2c1e146c03dfa36d5dc32c1606b9c05b9b637b68e1e65d533d701c41873db1eb",
	                    "EndpointID": "01780b93a59159822b7b9047c4fbf0064a597d1c0dcdf9e1a1796aa8de9e581b",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-800908",
	                        "b531313c62c4"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
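The inspect output above records the container State fields the post-mortem cares about (Status and Paused). A short Go sketch (an illustrative helper, not part of the test suite) that pulls just those two fields via the docker CLI:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		name := "no-preload-800908" // profile/container name from this run
		// Ask docker for only the two State fields checked in the post-mortem.
		out, err := exec.Command("docker", "inspect",
			"--format", "status={{.State.Status}} paused={{.State.Paused}}", name).Output()
		if err != nil {
			fmt.Println("docker inspect failed:", err)
			return
		}
		fmt.Println(strings.TrimSpace(string(out))) // e.g. "status=running paused=false"
	}

Note that minikube's pause operates on the Kubernetes components inside the node container, so a docker-level "paused=false" is expected here regardless of whether the pause subcommand succeeded.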
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-800908 -n no-preload-800908
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-800908 -n no-preload-800908: exit status 2 (350.002913ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-800908 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-800908 logs -n 25: (1.293639944s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ pause   │ -p default-k8s-diff-port-579203 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-579203 │ jenkins │ v1.37.0 │ 19 Nov 25 03:02 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-579203                                                                                                                                                                                                               │ default-k8s-diff-port-579203 │ jenkins │ v1.37.0 │ 19 Nov 25 03:02 UTC │ 19 Nov 25 03:02 UTC │
	│ delete  │ -p default-k8s-diff-port-579203                                                                                                                                                                                                               │ default-k8s-diff-port-579203 │ jenkins │ v1.37.0 │ 19 Nov 25 03:02 UTC │ 19 Nov 25 03:02 UTC │
	│ delete  │ -p disable-driver-mounts-722439                                                                                                                                                                                                               │ disable-driver-mounts-722439 │ jenkins │ v1.37.0 │ 19 Nov 25 03:02 UTC │ 19 Nov 25 03:02 UTC │
	│ start   │ -p no-preload-800908 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-800908            │ jenkins │ v1.37.0 │ 19 Nov 25 03:02 UTC │ 19 Nov 25 03:03 UTC │
	│ image   │ embed-certs-592123 image list --format=json                                                                                                                                                                                                   │ embed-certs-592123           │ jenkins │ v1.37.0 │ 19 Nov 25 03:02 UTC │ 19 Nov 25 03:02 UTC │
	│ pause   │ -p embed-certs-592123 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-592123           │ jenkins │ v1.37.0 │ 19 Nov 25 03:02 UTC │                     │
	│ delete  │ -p embed-certs-592123                                                                                                                                                                                                                         │ embed-certs-592123           │ jenkins │ v1.37.0 │ 19 Nov 25 03:02 UTC │ 19 Nov 25 03:02 UTC │
	│ delete  │ -p embed-certs-592123                                                                                                                                                                                                                         │ embed-certs-592123           │ jenkins │ v1.37.0 │ 19 Nov 25 03:02 UTC │ 19 Nov 25 03:02 UTC │
	│ start   │ -p newest-cni-886248 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-886248            │ jenkins │ v1.37.0 │ 19 Nov 25 03:02 UTC │ 19 Nov 25 03:03 UTC │
	│ addons  │ enable metrics-server -p newest-cni-886248 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-886248            │ jenkins │ v1.37.0 │ 19 Nov 25 03:03 UTC │                     │
	│ stop    │ -p newest-cni-886248 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-886248            │ jenkins │ v1.37.0 │ 19 Nov 25 03:03 UTC │ 19 Nov 25 03:03 UTC │
	│ addons  │ enable dashboard -p newest-cni-886248 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-886248            │ jenkins │ v1.37.0 │ 19 Nov 25 03:03 UTC │ 19 Nov 25 03:03 UTC │
	│ start   │ -p newest-cni-886248 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-886248            │ jenkins │ v1.37.0 │ 19 Nov 25 03:03 UTC │ 19 Nov 25 03:04 UTC │
	│ image   │ newest-cni-886248 image list --format=json                                                                                                                                                                                                    │ newest-cni-886248            │ jenkins │ v1.37.0 │ 19 Nov 25 03:04 UTC │ 19 Nov 25 03:04 UTC │
	│ pause   │ -p newest-cni-886248 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-886248            │ jenkins │ v1.37.0 │ 19 Nov 25 03:04 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-800908 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-800908            │ jenkins │ v1.37.0 │ 19 Nov 25 03:04 UTC │                     │
	│ delete  │ -p newest-cni-886248                                                                                                                                                                                                                          │ newest-cni-886248            │ jenkins │ v1.37.0 │ 19 Nov 25 03:04 UTC │ 19 Nov 25 03:04 UTC │
	│ delete  │ -p newest-cni-886248                                                                                                                                                                                                                          │ newest-cni-886248            │ jenkins │ v1.37.0 │ 19 Nov 25 03:04 UTC │ 19 Nov 25 03:04 UTC │
	│ stop    │ -p no-preload-800908 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-800908            │ jenkins │ v1.37.0 │ 19 Nov 25 03:04 UTC │ 19 Nov 25 03:04 UTC │
	│ start   │ -p auto-889743 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-889743                  │ jenkins │ v1.37.0 │ 19 Nov 25 03:04 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-800908 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-800908            │ jenkins │ v1.37.0 │ 19 Nov 25 03:04 UTC │ 19 Nov 25 03:04 UTC │
	│ start   │ -p no-preload-800908 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-800908            │ jenkins │ v1.37.0 │ 19 Nov 25 03:04 UTC │ 19 Nov 25 03:05 UTC │
	│ image   │ no-preload-800908 image list --format=json                                                                                                                                                                                                    │ no-preload-800908            │ jenkins │ v1.37.0 │ 19 Nov 25 03:05 UTC │ 19 Nov 25 03:05 UTC │
	│ pause   │ -p no-preload-800908 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-800908            │ jenkins │ v1.37.0 │ 19 Nov 25 03:05 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 03:04:22
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 03:04:22.477306 1675821 out.go:360] Setting OutFile to fd 1 ...
	I1119 03:04:22.477529 1675821 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 03:04:22.477557 1675821 out.go:374] Setting ErrFile to fd 2...
	I1119 03:04:22.477575 1675821 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 03:04:22.477864 1675821 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-1463525/.minikube/bin
	I1119 03:04:22.478277 1675821 out.go:368] Setting JSON to false
	I1119 03:04:22.479264 1675821 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":38790,"bootTime":1763482673,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1119 03:04:22.479357 1675821 start.go:143] virtualization:  
	I1119 03:04:22.482775 1675821 out.go:179] * [no-preload-800908] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1119 03:04:22.486923 1675821 out.go:179]   - MINIKUBE_LOCATION=21924
	I1119 03:04:22.486988 1675821 notify.go:221] Checking for updates...
	I1119 03:04:22.493916 1675821 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 03:04:22.496918 1675821 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21924-1463525/kubeconfig
	I1119 03:04:22.499739 1675821 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-1463525/.minikube
	I1119 03:04:22.502679 1675821 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1119 03:04:22.505662 1675821 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 03:04:22.509211 1675821 config.go:182] Loaded profile config "no-preload-800908": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 03:04:22.509833 1675821 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 03:04:22.547110 1675821 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1119 03:04:22.547225 1675821 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 03:04:22.641936 1675821 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-19 03:04:22.629366043 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 03:04:22.642046 1675821 docker.go:319] overlay module found
	I1119 03:04:22.645358 1675821 out.go:179] * Using the docker driver based on existing profile
	I1119 03:04:22.648203 1675821 start.go:309] selected driver: docker
	I1119 03:04:22.648219 1675821 start.go:930] validating driver "docker" against &{Name:no-preload-800908 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-800908 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9
p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 03:04:22.648326 1675821 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 03:04:22.648985 1675821 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 03:04:22.738407 1675821 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-19 03:04:22.728328916 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 03:04:22.738728 1675821 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 03:04:22.738770 1675821 cni.go:84] Creating CNI manager for ""
	I1119 03:04:22.738823 1675821 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 03:04:22.738870 1675821 start.go:353] cluster config:
	{Name:no-preload-800908 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-800908 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 03:04:22.744066 1675821 out.go:179] * Starting "no-preload-800908" primary control-plane node in "no-preload-800908" cluster
	I1119 03:04:22.747796 1675821 cache.go:134] Beginning downloading kic base image for docker with crio
	I1119 03:04:22.750846 1675821 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1119 03:04:22.753591 1675821 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 03:04:22.753649 1675821 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1119 03:04:22.753726 1675821 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/no-preload-800908/config.json ...
	I1119 03:04:22.754009 1675821 cache.go:107] acquiring lock: {Name:mkb58f30e5376d33040dfa777b3f8180ea85082b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 03:04:22.754093 1675821 cache.go:115] /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1119 03:04:22.754107 1675821 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 109.158µs
	I1119 03:04:22.754115 1675821 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1119 03:04:22.754127 1675821 cache.go:107] acquiring lock: {Name:mk4427b1057ed3426220ced6aa14c26e167661f6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 03:04:22.754160 1675821 cache.go:115] /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1119 03:04:22.754170 1675821 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 44.298µs
	I1119 03:04:22.754176 1675821 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1119 03:04:22.754186 1675821 cache.go:107] acquiring lock: {Name:mke3a5e1f8219de1d6d968640b180760e94eaad4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 03:04:22.754219 1675821 cache.go:115] /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1119 03:04:22.754229 1675821 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 43.674µs
	I1119 03:04:22.754235 1675821 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1119 03:04:22.754244 1675821 cache.go:107] acquiring lock: {Name:mkc90d3e387ee9423dce3105ec70e08f9a213a9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 03:04:22.754276 1675821 cache.go:115] /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1119 03:04:22.754286 1675821 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 42.387µs
	I1119 03:04:22.754292 1675821 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1119 03:04:22.754301 1675821 cache.go:107] acquiring lock: {Name:mk6ffbb0756aa279cf3ba05ddd5e5f7e66e5cbe5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 03:04:22.754333 1675821 cache.go:115] /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1119 03:04:22.754342 1675821 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 42.51µs
	I1119 03:04:22.754349 1675821 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1119 03:04:22.754358 1675821 cache.go:107] acquiring lock: {Name:mk88c3661a1e8c3438804e10f7c7d80646d19f18 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 03:04:22.754392 1675821 cache.go:115] /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1119 03:04:22.754401 1675821 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 44.487µs
	I1119 03:04:22.754407 1675821 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1119 03:04:22.754416 1675821 cache.go:107] acquiring lock: {Name:mk4358ffb1d662d66c4de9c14824434035268345 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 03:04:22.754442 1675821 cache.go:115] /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1119 03:04:22.754448 1675821 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 32.213µs
	I1119 03:04:22.754453 1675821 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1119 03:04:22.754461 1675821 cache.go:107] acquiring lock: {Name:mk1d702ebd613a383e3fb22e99729e7baba0b90f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 03:04:22.754486 1675821 cache.go:115] /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1119 03:04:22.754491 1675821 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 30.669µs
	I1119 03:04:22.754496 1675821 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1119 03:04:22.754501 1675821 cache.go:87] Successfully saved all images to host disk.
	I1119 03:04:22.774440 1675821 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1119 03:04:22.774465 1675821 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1119 03:04:22.774477 1675821 cache.go:243] Successfully downloaded all kic artifacts
	I1119 03:04:22.774503 1675821 start.go:360] acquireMachinesLock for no-preload-800908: {Name:mk6bdccc03286e3d7d2db959eee2861a6643234c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 03:04:22.774554 1675821 start.go:364] duration metric: took 32.967µs to acquireMachinesLock for "no-preload-800908"
	I1119 03:04:22.774579 1675821 start.go:96] Skipping create...Using existing machine configuration
	I1119 03:04:22.774584 1675821 fix.go:54] fixHost starting: 
	I1119 03:04:22.774844 1675821 cli_runner.go:164] Run: docker container inspect no-preload-800908 --format={{.State.Status}}
	I1119 03:04:22.805965 1675821 fix.go:112] recreateIfNeeded on no-preload-800908: state=Stopped err=<nil>
	W1119 03:04:22.805992 1675821 fix.go:138] unexpected machine state, will restart: <nil>
	I1119 03:04:20.790362 1673805 cli_runner.go:164] Run: docker network inspect auto-889743 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 03:04:20.806078 1673805 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1119 03:04:20.809828 1673805 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 03:04:20.818965 1673805 kubeadm.go:884] updating cluster {Name:auto-889743 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-889743 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:
[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 03:04:20.819088 1673805 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 03:04:20.819150 1673805 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 03:04:20.850420 1673805 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 03:04:20.850440 1673805 crio.go:433] Images already preloaded, skipping extraction
	I1119 03:04:20.850493 1673805 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 03:04:20.875573 1673805 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 03:04:20.875635 1673805 cache_images.go:86] Images are preloaded, skipping loading
	I1119 03:04:20.875650 1673805 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1119 03:04:20.875749 1673805 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-889743 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-889743 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 03:04:20.875830 1673805 ssh_runner.go:195] Run: crio config
	I1119 03:04:20.930247 1673805 cni.go:84] Creating CNI manager for ""
	I1119 03:04:20.930381 1673805 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 03:04:20.930415 1673805 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 03:04:20.930468 1673805 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-889743 NodeName:auto-889743 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/
manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 03:04:20.930626 1673805 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-889743"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1119 03:04:20.930700 1673805 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 03:04:20.938283 1673805 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 03:04:20.938397 1673805 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 03:04:20.945577 1673805 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1119 03:04:20.957321 1673805 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 03:04:20.970932 1673805 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
	I1119 03:04:20.982970 1673805 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1119 03:04:20.986301 1673805 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 03:04:20.995387 1673805 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 03:04:21.118536 1673805 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 03:04:21.134578 1673805 certs.go:69] Setting up /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/auto-889743 for IP: 192.168.76.2
	I1119 03:04:21.134600 1673805 certs.go:195] generating shared ca certs ...
	I1119 03:04:21.134616 1673805 certs.go:227] acquiring lock for ca certs: {Name:mk25124d30540ffea11da5f0aa38fb6f55186602 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 03:04:21.134810 1673805 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.key
	I1119 03:04:21.134873 1673805 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/proxy-client-ca.key
	I1119 03:04:21.134886 1673805 certs.go:257] generating profile certs ...
	I1119 03:04:21.134970 1673805 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/auto-889743/client.key
	I1119 03:04:21.134988 1673805 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/auto-889743/client.crt with IP's: []
	I1119 03:04:21.647379 1673805 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/auto-889743/client.crt ...
	I1119 03:04:21.647411 1673805 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/auto-889743/client.crt: {Name:mk4968452fc6432ffbcd75e560a0b055d12d547d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 03:04:21.647641 1673805 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/auto-889743/client.key ...
	I1119 03:04:21.647657 1673805 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/auto-889743/client.key: {Name:mk69be173031f6e237aa979eb41f1c630569af27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 03:04:21.647763 1673805 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/auto-889743/apiserver.key.9a6120e7
	I1119 03:04:21.647784 1673805 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/auto-889743/apiserver.crt.9a6120e7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1119 03:04:21.919533 1673805 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/auto-889743/apiserver.crt.9a6120e7 ...
	I1119 03:04:21.919564 1673805 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/auto-889743/apiserver.crt.9a6120e7: {Name:mk4f448b899e5cfca9dab4b079bc5adff866e432 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 03:04:21.919758 1673805 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/auto-889743/apiserver.key.9a6120e7 ...
	I1119 03:04:21.919777 1673805 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/auto-889743/apiserver.key.9a6120e7: {Name:mk82626b592048e91a002fae92b142b076e3f304 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 03:04:21.919862 1673805 certs.go:382] copying /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/auto-889743/apiserver.crt.9a6120e7 -> /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/auto-889743/apiserver.crt
	I1119 03:04:21.919948 1673805 certs.go:386] copying /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/auto-889743/apiserver.key.9a6120e7 -> /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/auto-889743/apiserver.key
	I1119 03:04:21.920012 1673805 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/auto-889743/proxy-client.key
	I1119 03:04:21.920031 1673805 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/auto-889743/proxy-client.crt with IP's: []
	I1119 03:04:22.295569 1673805 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/auto-889743/proxy-client.crt ...
	I1119 03:04:22.295601 1673805 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/auto-889743/proxy-client.crt: {Name:mk95e166548be8785a0f5cf96868a38df8721371 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 03:04:22.295810 1673805 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/auto-889743/proxy-client.key ...
	I1119 03:04:22.295826 1673805 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/auto-889743/proxy-client.key: {Name:mk9c88e156ce0018059ec1432dda8ee584f6a5e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 03:04:22.296046 1673805 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/1465377.pem (1338 bytes)
	W1119 03:04:22.296101 1673805 certs.go:480] ignoring /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/1465377_empty.pem, impossibly tiny 0 bytes
	I1119 03:04:22.296117 1673805 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca-key.pem (1679 bytes)
	I1119 03:04:22.296158 1673805 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem (1078 bytes)
	I1119 03:04:22.296197 1673805 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/cert.pem (1123 bytes)
	I1119 03:04:22.296220 1673805 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/key.pem (1675 bytes)
	I1119 03:04:22.296281 1673805 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/files/etc/ssl/certs/14653772.pem (1708 bytes)
	I1119 03:04:22.296919 1673805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 03:04:22.314116 1673805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1119 03:04:22.334054 1673805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 03:04:22.354574 1673805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1119 03:04:22.372525 1673805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/auto-889743/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1119 03:04:22.391335 1673805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/auto-889743/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1119 03:04:22.413425 1673805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/auto-889743/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 03:04:22.431458 1673805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/auto-889743/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1119 03:04:22.450243 1673805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/files/etc/ssl/certs/14653772.pem --> /usr/share/ca-certificates/14653772.pem (1708 bytes)
	I1119 03:04:22.468733 1673805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 03:04:22.488719 1673805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/1465377.pem --> /usr/share/ca-certificates/1465377.pem (1338 bytes)
	I1119 03:04:22.512179 1673805 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 03:04:22.530969 1673805 ssh_runner.go:195] Run: openssl version
	I1119 03:04:22.541925 1673805 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 03:04:22.551297 1673805 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 03:04:22.555425 1673805 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 01:58 /usr/share/ca-certificates/minikubeCA.pem
	I1119 03:04:22.555488 1673805 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 03:04:22.598526 1673805 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 03:04:22.606786 1673805 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1465377.pem && ln -fs /usr/share/ca-certificates/1465377.pem /etc/ssl/certs/1465377.pem"
	I1119 03:04:22.614840 1673805 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1465377.pem
	I1119 03:04:22.619090 1673805 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 02:04 /usr/share/ca-certificates/1465377.pem
	I1119 03:04:22.619149 1673805 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1465377.pem
	I1119 03:04:22.663889 1673805 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1465377.pem /etc/ssl/certs/51391683.0"
	I1119 03:04:22.672790 1673805 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14653772.pem && ln -fs /usr/share/ca-certificates/14653772.pem /etc/ssl/certs/14653772.pem"
	I1119 03:04:22.686617 1673805 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14653772.pem
	I1119 03:04:22.690340 1673805 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 02:04 /usr/share/ca-certificates/14653772.pem
	I1119 03:04:22.690408 1673805 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14653772.pem
	I1119 03:04:22.734822 1673805 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14653772.pem /etc/ssl/certs/3ec20f2e.0"
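The repeated test -s / openssl x509 -hash / ln -fs sequence above installs each CA bundle under /etc/ssl/certs by its OpenSSL subject hash (for example b5213941.0 for minikubeCA.pem). A rough local equivalent in Go, assuming direct access to the node filesystem rather than the ssh_runner used in the log; linkByHash is a hypothetical helper name:

```go
// Sketch: compute the OpenSSL subject hash for a CA cert and symlink it as
// /etc/ssl/certs/<hash>.0, mirroring the hash-and-link step done over SSH above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkByHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // ln -fs semantics: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	for _, p := range []string{
		"/usr/share/ca-certificates/minikubeCA.pem",
		"/usr/share/ca-certificates/1465377.pem",
		"/usr/share/ca-certificates/14653772.pem",
	} {
		if err := linkByHash(p); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}
```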
	I1119 03:04:22.743873 1673805 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 03:04:22.747510 1673805 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1119 03:04:22.747566 1673805 kubeadm.go:401] StartCluster: {Name:auto-889743 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-889743 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 03:04:22.747639 1673805 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 03:04:22.747698 1673805 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 03:04:22.793717 1673805 cri.go:89] found id: ""
	I1119 03:04:22.793798 1673805 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 03:04:22.820371 1673805 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1119 03:04:22.842976 1673805 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1119 03:04:22.843041 1673805 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1119 03:04:22.855895 1673805 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1119 03:04:22.855915 1673805 kubeadm.go:158] found existing configuration files:
	
	I1119 03:04:22.855977 1673805 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1119 03:04:22.867310 1673805 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1119 03:04:22.867382 1673805 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1119 03:04:22.875322 1673805 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1119 03:04:22.896052 1673805 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1119 03:04:22.896119 1673805 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1119 03:04:22.906586 1673805 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1119 03:04:22.915811 1673805 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1119 03:04:22.915871 1673805 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1119 03:04:22.938553 1673805 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1119 03:04:22.951671 1673805 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1119 03:04:22.951786 1673805 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
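The config-check loop above greps each kubeconfig under /etc/kubernetes for the control-plane endpoint and removes any file that lacks it, so the following kubeadm init regenerates it from scratch. A small sketch of that logic, assuming local file access instead of the SSH runner; this is not the actual minikube code path:

```go
// Sketch: keep a kubeconfig-style file only if it already points at the
// expected control-plane endpoint, otherwise remove it as stale.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing file or wrong endpoint: treat as stale and delete.
			if rmErr := os.Remove(f); rmErr != nil && !os.IsNotExist(rmErr) {
				fmt.Fprintf(os.Stderr, "removing %s: %v\n", f, rmErr)
			}
		}
	}
}
```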
	I1119 03:04:22.960341 1673805 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1119 03:04:23.020295 1673805 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1119 03:04:23.020436 1673805 kubeadm.go:319] [preflight] Running pre-flight checks
	I1119 03:04:23.049092 1673805 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1119 03:04:23.049184 1673805 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1119 03:04:23.049227 1673805 kubeadm.go:319] OS: Linux
	I1119 03:04:23.049285 1673805 kubeadm.go:319] CGROUPS_CPU: enabled
	I1119 03:04:23.049340 1673805 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1119 03:04:23.049396 1673805 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1119 03:04:23.049469 1673805 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1119 03:04:23.049624 1673805 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1119 03:04:23.049721 1673805 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1119 03:04:23.049807 1673805 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1119 03:04:23.049904 1673805 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1119 03:04:23.049963 1673805 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1119 03:04:23.166268 1673805 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1119 03:04:23.166383 1673805 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1119 03:04:23.166489 1673805 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1119 03:04:23.192873 1673805 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1119 03:04:23.198323 1673805 out.go:252]   - Generating certificates and keys ...
	I1119 03:04:23.198434 1673805 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1119 03:04:23.198511 1673805 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1119 03:04:23.877601 1673805 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1119 03:04:24.375965 1673805 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1119 03:04:22.811158 1675821 out.go:252] * Restarting existing docker container for "no-preload-800908" ...
	I1119 03:04:22.811244 1675821 cli_runner.go:164] Run: docker start no-preload-800908
	I1119 03:04:23.095393 1675821 cli_runner.go:164] Run: docker container inspect no-preload-800908 --format={{.State.Status}}
	I1119 03:04:23.133120 1675821 kic.go:430] container "no-preload-800908" state is running.
	I1119 03:04:23.133500 1675821 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-800908
	I1119 03:04:23.157043 1675821 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/no-preload-800908/config.json ...
	I1119 03:04:23.157256 1675821 machine.go:94] provisionDockerMachine start ...
	I1119 03:04:23.157323 1675821 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-800908
	I1119 03:04:23.184664 1675821 main.go:143] libmachine: Using SSH client type: native
	I1119 03:04:23.184987 1675821 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34945 <nil> <nil>}
	I1119 03:04:23.185006 1675821 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 03:04:23.185706 1675821 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1119 03:04:26.338727 1675821 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-800908
	
	I1119 03:04:26.338817 1675821 ubuntu.go:182] provisioning hostname "no-preload-800908"
	I1119 03:04:26.338934 1675821 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-800908
	I1119 03:04:26.367888 1675821 main.go:143] libmachine: Using SSH client type: native
	I1119 03:04:26.368255 1675821 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34945 <nil> <nil>}
	I1119 03:04:26.368268 1675821 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-800908 && echo "no-preload-800908" | sudo tee /etc/hostname
	I1119 03:04:26.537478 1675821 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-800908
	
	I1119 03:04:26.537609 1675821 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-800908
	I1119 03:04:26.565776 1675821 main.go:143] libmachine: Using SSH client type: native
	I1119 03:04:26.566097 1675821 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34945 <nil> <nil>}
	I1119 03:04:26.566120 1675821 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-800908' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-800908/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-800908' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 03:04:26.729876 1675821 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 03:04:26.729903 1675821 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21924-1463525/.minikube CaCertPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21924-1463525/.minikube}
	I1119 03:04:26.729932 1675821 ubuntu.go:190] setting up certificates
	I1119 03:04:26.729950 1675821 provision.go:84] configureAuth start
	I1119 03:04:26.730012 1675821 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-800908
	I1119 03:04:26.751917 1675821 provision.go:143] copyHostCerts
	I1119 03:04:26.751985 1675821 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.pem, removing ...
	I1119 03:04:26.752005 1675821 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.pem
	I1119 03:04:26.752090 1675821 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.pem (1078 bytes)
	I1119 03:04:26.752197 1675821 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-1463525/.minikube/cert.pem, removing ...
	I1119 03:04:26.752208 1675821 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-1463525/.minikube/cert.pem
	I1119 03:04:26.752235 1675821 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21924-1463525/.minikube/cert.pem (1123 bytes)
	I1119 03:04:26.752295 1675821 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-1463525/.minikube/key.pem, removing ...
	I1119 03:04:26.752304 1675821 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-1463525/.minikube/key.pem
	I1119 03:04:26.752327 1675821 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21924-1463525/.minikube/key.pem (1675 bytes)
	I1119 03:04:26.752379 1675821 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca-key.pem org=jenkins.no-preload-800908 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-800908]
	I1119 03:04:27.066523 1675821 provision.go:177] copyRemoteCerts
	I1119 03:04:27.066595 1675821 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 03:04:27.066639 1675821 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-800908
	I1119 03:04:27.084521 1675821 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34945 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/no-preload-800908/id_rsa Username:docker}
	I1119 03:04:27.185910 1675821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1119 03:04:27.206304 1675821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1119 03:04:27.225483 1675821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1119 03:04:27.245359 1675821 provision.go:87] duration metric: took 515.38313ms to configureAuth
	I1119 03:04:27.245386 1675821 ubuntu.go:206] setting minikube options for container-runtime
	I1119 03:04:27.245648 1675821 config.go:182] Loaded profile config "no-preload-800908": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 03:04:27.245751 1675821 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-800908
	I1119 03:04:27.269715 1675821 main.go:143] libmachine: Using SSH client type: native
	I1119 03:04:27.270086 1675821 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 34945 <nil> <nil>}
	I1119 03:04:27.270105 1675821 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 03:04:27.702511 1675821 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 03:04:27.702533 1675821 machine.go:97] duration metric: took 4.545264555s to provisionDockerMachine
	I1119 03:04:27.702558 1675821 start.go:293] postStartSetup for "no-preload-800908" (driver="docker")
	I1119 03:04:27.702574 1675821 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 03:04:27.702639 1675821 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 03:04:27.702676 1675821 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-800908
	I1119 03:04:27.727081 1675821 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34945 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/no-preload-800908/id_rsa Username:docker}
	I1119 03:04:27.834258 1675821 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 03:04:27.838159 1675821 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 03:04:27.838184 1675821 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 03:04:27.838200 1675821 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-1463525/.minikube/addons for local assets ...
	I1119 03:04:27.838251 1675821 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-1463525/.minikube/files for local assets ...
	I1119 03:04:27.838330 1675821 filesync.go:149] local asset: /home/jenkins/minikube-integration/21924-1463525/.minikube/files/etc/ssl/certs/14653772.pem -> 14653772.pem in /etc/ssl/certs
	I1119 03:04:27.838431 1675821 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 03:04:27.846432 1675821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/files/etc/ssl/certs/14653772.pem --> /etc/ssl/certs/14653772.pem (1708 bytes)
	I1119 03:04:27.865426 1675821 start.go:296] duration metric: took 162.852151ms for postStartSetup
	I1119 03:04:27.865639 1675821 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 03:04:27.865713 1675821 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-800908
	I1119 03:04:27.887086 1675821 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34945 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/no-preload-800908/id_rsa Username:docker}
	I1119 03:04:27.986576 1675821 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 03:04:27.991531 1675821 fix.go:56] duration metric: took 5.216938581s for fixHost
	I1119 03:04:27.991558 1675821 start.go:83] releasing machines lock for "no-preload-800908", held for 5.216989698s
	I1119 03:04:27.991622 1675821 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-800908
	I1119 03:04:28.010559 1675821 ssh_runner.go:195] Run: cat /version.json
	I1119 03:04:28.010611 1675821 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-800908
	I1119 03:04:28.010631 1675821 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 03:04:28.010703 1675821 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-800908
	I1119 03:04:28.045903 1675821 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34945 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/no-preload-800908/id_rsa Username:docker}
	I1119 03:04:28.061487 1675821 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34945 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/no-preload-800908/id_rsa Username:docker}
	I1119 03:04:28.165449 1675821 ssh_runner.go:195] Run: systemctl --version
	I1119 03:04:28.268864 1675821 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 03:04:28.310548 1675821 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 03:04:28.315751 1675821 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 03:04:28.315842 1675821 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 03:04:28.324285 1675821 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
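The find/mv step above sidelines any pre-existing bridge or podman CNI config so the kindnet configuration selected later takes effect. A hedged Go equivalent, assuming local renames rather than the remote find -exec mv seen in the log:

```go
// Sketch: rename any *bridge*/*podman* config in /etc/cni/net.d to
// *.mk_disabled so a competing bridge CNI no longer loads.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	for _, pattern := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
		matches, _ := filepath.Glob(pattern)
		for _, m := range matches {
			if filepath.Ext(m) == ".mk_disabled" {
				continue // already disabled
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				fmt.Fprintln(os.Stderr, err)
			}
		}
	}
}
```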
	I1119 03:04:28.324322 1675821 start.go:496] detecting cgroup driver to use...
	I1119 03:04:28.324352 1675821 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1119 03:04:28.324408 1675821 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 03:04:28.339759 1675821 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 03:04:28.353545 1675821 docker.go:218] disabling cri-docker service (if available) ...
	I1119 03:04:28.353617 1675821 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 03:04:28.369803 1675821 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 03:04:28.383942 1675821 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 03:04:28.529444 1675821 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 03:04:28.672412 1675821 docker.go:234] disabling docker service ...
	I1119 03:04:28.672484 1675821 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 03:04:28.687776 1675821 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 03:04:28.702110 1675821 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 03:04:28.861541 1675821 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 03:04:29.019918 1675821 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 03:04:29.034606 1675821 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 03:04:29.048887 1675821 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 03:04:29.048952 1675821 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 03:04:29.057944 1675821 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1119 03:04:29.058030 1675821 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 03:04:29.067225 1675821 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 03:04:29.076371 1675821 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 03:04:29.085632 1675821 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 03:04:29.094458 1675821 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 03:04:29.103631 1675821 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 03:04:29.112386 1675821 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 03:04:29.121448 1675821 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 03:04:29.129853 1675821 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 03:04:29.137955 1675821 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 03:04:29.277615 1675821 ssh_runner.go:195] Run: sudo systemctl restart crio
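The sed calls above rewrite /etc/crio/crio.conf.d/02-crio.conf to use the registry.k8s.io/pause:3.10.1 pause image and the cgroupfs cgroup manager before crio is restarted. A sketch of just those two substitutions, assuming local file edits with regexp instead of sed over SSH:

```go
// Sketch: apply the pause_image and cgroup_manager rewrites to the CRI-O drop-in.
package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	conf := string(data)
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
		panic(err)
	}
}
```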
	I1119 03:04:29.475839 1675821 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 03:04:29.475926 1675821 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 03:04:29.480424 1675821 start.go:564] Will wait 60s for crictl version
	I1119 03:04:29.480496 1675821 ssh_runner.go:195] Run: which crictl
	I1119 03:04:29.484215 1675821 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 03:04:29.527833 1675821 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1119 03:04:29.527928 1675821 ssh_runner.go:195] Run: crio --version
	I1119 03:04:29.591343 1675821 ssh_runner.go:195] Run: crio --version
	I1119 03:04:29.632174 1675821 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1119 03:04:25.602531 1673805 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1119 03:04:26.169128 1673805 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1119 03:04:28.576219 1673805 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1119 03:04:28.576482 1673805 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-889743 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1119 03:04:29.634933 1675821 cli_runner.go:164] Run: docker network inspect no-preload-800908 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 03:04:29.651400 1675821 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1119 03:04:29.655911 1675821 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 03:04:29.671952 1675821 kubeadm.go:884] updating cluster {Name:no-preload-800908 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-800908 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 03:04:29.672071 1675821 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 03:04:29.672121 1675821 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 03:04:29.715290 1675821 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 03:04:29.715318 1675821 cache_images.go:86] Images are preloaded, skipping loading
	I1119 03:04:29.715326 1675821 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1119 03:04:29.715426 1675821 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-800908 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-800908 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 03:04:29.715508 1675821 ssh_runner.go:195] Run: crio config
	I1119 03:04:29.796980 1675821 cni.go:84] Creating CNI manager for ""
	I1119 03:04:29.797013 1675821 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 03:04:29.797038 1675821 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 03:04:29.797067 1675821 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-800908 NodeName:no-preload-800908 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 03:04:29.797197 1675821 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-800908"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1119 03:04:29.797272 1675821 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 03:04:29.812171 1675821 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 03:04:29.812253 1675821 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 03:04:29.820591 1675821 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1119 03:04:29.835019 1675821 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 03:04:29.849614 1675821 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1119 03:04:29.864522 1675821 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1119 03:04:29.868733 1675821 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
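The bash one-liner above rewrites /etc/hosts so exactly one entry maps control-plane.minikube.internal to the node IP. A rough Go equivalent under the same assumption of local file access (the real flow builds a temp file and sudo cp's it into place):

```go
// Sketch: drop any existing control-plane.minikube.internal entry and append
// the current one, as the grep -v / echo pipeline above does.
package main

import (
	"os"
	"strings"
)

func main() {
	const entry = "192.168.85.2\tcontrol-plane.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
}
```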
	I1119 03:04:29.879176 1675821 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 03:04:30.034156 1675821 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 03:04:30.054377 1675821 certs.go:69] Setting up /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/no-preload-800908 for IP: 192.168.85.2
	I1119 03:04:30.054415 1675821 certs.go:195] generating shared ca certs ...
	I1119 03:04:30.054432 1675821 certs.go:227] acquiring lock for ca certs: {Name:mk25124d30540ffea11da5f0aa38fb6f55186602 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 03:04:30.054656 1675821 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.key
	I1119 03:04:30.054721 1675821 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/proxy-client-ca.key
	I1119 03:04:30.054736 1675821 certs.go:257] generating profile certs ...
	I1119 03:04:30.054862 1675821 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/no-preload-800908/client.key
	I1119 03:04:30.054962 1675821 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/no-preload-800908/apiserver.key.a073045a
	I1119 03:04:30.055009 1675821 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/no-preload-800908/proxy-client.key
	I1119 03:04:30.055157 1675821 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/1465377.pem (1338 bytes)
	W1119 03:04:30.055203 1675821 certs.go:480] ignoring /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/1465377_empty.pem, impossibly tiny 0 bytes
	I1119 03:04:30.055218 1675821 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca-key.pem (1679 bytes)
	I1119 03:04:30.055244 1675821 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/ca.pem (1078 bytes)
	I1119 03:04:30.055279 1675821 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/cert.pem (1123 bytes)
	I1119 03:04:30.055307 1675821 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/key.pem (1675 bytes)
	I1119 03:04:30.055365 1675821 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-1463525/.minikube/files/etc/ssl/certs/14653772.pem (1708 bytes)
	I1119 03:04:30.056216 1675821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 03:04:30.151786 1675821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1119 03:04:30.175482 1675821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 03:04:30.211097 1675821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1119 03:04:30.272244 1675821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/no-preload-800908/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1119 03:04:30.320185 1675821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/no-preload-800908/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1119 03:04:30.382620 1675821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/no-preload-800908/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 03:04:30.413991 1675821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/no-preload-800908/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1119 03:04:30.437379 1675821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/files/etc/ssl/certs/14653772.pem --> /usr/share/ca-certificates/14653772.pem (1708 bytes)
	I1119 03:04:30.461365 1675821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 03:04:30.480940 1675821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-1463525/.minikube/certs/1465377.pem --> /usr/share/ca-certificates/1465377.pem (1338 bytes)
	I1119 03:04:30.511513 1675821 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 03:04:30.526389 1675821 ssh_runner.go:195] Run: openssl version
	I1119 03:04:30.532877 1675821 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14653772.pem && ln -fs /usr/share/ca-certificates/14653772.pem /etc/ssl/certs/14653772.pem"
	I1119 03:04:30.543953 1675821 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14653772.pem
	I1119 03:04:30.547918 1675821 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 02:04 /usr/share/ca-certificates/14653772.pem
	I1119 03:04:30.547997 1675821 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14653772.pem
	I1119 03:04:30.589956 1675821 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14653772.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 03:04:30.597875 1675821 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 03:04:30.605739 1675821 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 03:04:30.609933 1675821 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 01:58 /usr/share/ca-certificates/minikubeCA.pem
	I1119 03:04:30.610007 1675821 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 03:04:30.651169 1675821 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 03:04:30.659132 1675821 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1465377.pem && ln -fs /usr/share/ca-certificates/1465377.pem /etc/ssl/certs/1465377.pem"
	I1119 03:04:30.667464 1675821 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1465377.pem
	I1119 03:04:30.671601 1675821 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 02:04 /usr/share/ca-certificates/1465377.pem
	I1119 03:04:30.671675 1675821 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1465377.pem
	I1119 03:04:30.714054 1675821 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1465377.pem /etc/ssl/certs/51391683.0"
	I1119 03:04:30.722578 1675821 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 03:04:30.726771 1675821 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1119 03:04:30.767728 1675821 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1119 03:04:30.811652 1675821 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1119 03:04:30.888231 1675821 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1119 03:04:30.979272 1675821 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1119 03:04:31.056819 1675821 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
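The openssl -checkend 86400 probes above ask whether each control-plane certificate will still be valid 24 hours from now; on a restart, a certificate failing the check would be regenerated. A sketch of the same check with crypto/x509, assuming the certs are readable locally; expiresWithin is a hypothetical helper, not minikube's API:

```go
// Sketch: report whether a PEM certificate expires within the given window,
// matching the semantics of `openssl x509 -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	} {
		soon, err := expiresWithin(p, 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			continue
		}
		fmt.Printf("%s expires within 24h: %v\n", p, soon)
	}
}
```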
	I1119 03:04:31.134247 1675821 kubeadm.go:401] StartCluster: {Name:no-preload-800908 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-800908 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 03:04:31.134357 1675821 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 03:04:31.134430 1675821 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 03:04:31.196447 1675821 cri.go:89] found id: "bb7e2b0cb0cd02d62ac7ad2c37fe309260d9fcd24b72ccd2af687c7b1dcc6ec5"
	I1119 03:04:31.196475 1675821 cri.go:89] found id: "d72640f599edb0a7cc747d54663105ae5e186229c7ab646168a63821cf3e3666"
	I1119 03:04:31.196480 1675821 cri.go:89] found id: "1c5a8ad5bc6a5d13b6cef75a968c097e0e15feaca2933a332cc62792968879fc"
	I1119 03:04:31.196484 1675821 cri.go:89] found id: ""
	I1119 03:04:31.196541 1675821 ssh_runner.go:195] Run: sudo runc list -f json
	W1119 03:04:31.226724 1675821 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T03:04:31Z" level=error msg="open /run/runc: no such file or directory"
	I1119 03:04:31.226825 1675821 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 03:04:31.255209 1675821 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1119 03:04:31.255229 1675821 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1119 03:04:31.255293 1675821 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1119 03:04:31.283443 1675821 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1119 03:04:31.283883 1675821 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-800908" does not appear in /home/jenkins/minikube-integration/21924-1463525/kubeconfig
	I1119 03:04:31.284005 1675821 kubeconfig.go:62] /home/jenkins/minikube-integration/21924-1463525/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-800908" cluster setting kubeconfig missing "no-preload-800908" context setting]
	I1119 03:04:31.284313 1675821 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/kubeconfig: {Name:mk569dbb5fa76b01404eb61ebb04e9884665ea78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 03:04:31.287138 1675821 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1119 03:04:31.321259 1675821 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1119 03:04:31.321339 1675821 kubeadm.go:602] duration metric: took 66.102195ms to restartPrimaryControlPlane
	I1119 03:04:31.321363 1675821 kubeadm.go:403] duration metric: took 187.125055ms to StartCluster
	I1119 03:04:31.321407 1675821 settings.go:142] acquiring lock: {Name:mk60840cd190596747bdc8f4ec6ab9c30048bf11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 03:04:31.321485 1675821 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21924-1463525/kubeconfig
	I1119 03:04:31.322170 1675821 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/kubeconfig: {Name:mk569dbb5fa76b01404eb61ebb04e9884665ea78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 03:04:31.322442 1675821 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 03:04:31.322571 1675821 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 03:04:31.322902 1675821 addons.go:70] Setting storage-provisioner=true in profile "no-preload-800908"
	I1119 03:04:31.322931 1675821 addons.go:239] Setting addon storage-provisioner=true in "no-preload-800908"
	W1119 03:04:31.323032 1675821 addons.go:248] addon storage-provisioner should already be in state true
	I1119 03:04:31.323069 1675821 host.go:66] Checking if "no-preload-800908" exists ...
	I1119 03:04:31.323164 1675821 addons.go:70] Setting dashboard=true in profile "no-preload-800908"
	I1119 03:04:31.323192 1675821 addons.go:239] Setting addon dashboard=true in "no-preload-800908"
	W1119 03:04:31.323210 1675821 addons.go:248] addon dashboard should already be in state true
	I1119 03:04:31.323238 1675821 host.go:66] Checking if "no-preload-800908" exists ...
	I1119 03:04:31.323719 1675821 cli_runner.go:164] Run: docker container inspect no-preload-800908 --format={{.State.Status}}
	I1119 03:04:31.323786 1675821 cli_runner.go:164] Run: docker container inspect no-preload-800908 --format={{.State.Status}}
	I1119 03:04:31.322743 1675821 config.go:182] Loaded profile config "no-preload-800908": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 03:04:31.324299 1675821 addons.go:70] Setting default-storageclass=true in profile "no-preload-800908"
	I1119 03:04:31.324314 1675821 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-800908"
	I1119 03:04:31.324558 1675821 cli_runner.go:164] Run: docker container inspect no-preload-800908 --format={{.State.Status}}
	I1119 03:04:31.330533 1675821 out.go:179] * Verifying Kubernetes components...
	I1119 03:04:31.337712 1675821 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 03:04:31.389558 1675821 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1119 03:04:31.392866 1675821 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 03:04:31.395856 1675821 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 03:04:31.395877 1675821 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 03:04:31.395939 1675821 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-800908
	I1119 03:04:31.396070 1675821 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1119 03:04:31.398432 1675821 addons.go:239] Setting addon default-storageclass=true in "no-preload-800908"
	W1119 03:04:31.398459 1675821 addons.go:248] addon default-storageclass should already be in state true
	I1119 03:04:31.399396 1675821 host.go:66] Checking if "no-preload-800908" exists ...
	I1119 03:04:31.399470 1675821 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1119 03:04:31.399486 1675821 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1119 03:04:31.399554 1675821 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-800908
	I1119 03:04:31.399906 1675821 cli_runner.go:164] Run: docker container inspect no-preload-800908 --format={{.State.Status}}
	I1119 03:04:31.435951 1675821 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34945 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/no-preload-800908/id_rsa Username:docker}
	I1119 03:04:31.454749 1675821 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34945 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/no-preload-800908/id_rsa Username:docker}
	I1119 03:04:31.460603 1675821 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 03:04:31.460628 1675821 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 03:04:31.460691 1675821 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-800908
	I1119 03:04:31.493677 1675821 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34945 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/no-preload-800908/id_rsa Username:docker}
	I1119 03:04:31.802265 1675821 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1119 03:04:31.802357 1675821 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1119 03:04:31.852210 1675821 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 03:04:31.899496 1675821 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1119 03:04:31.899567 1675821 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1119 03:04:31.907969 1675821 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 03:04:31.930142 1675821 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 03:04:31.994633 1675821 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1119 03:04:31.994716 1675821 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1119 03:04:32.117072 1675821 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1119 03:04:32.117144 1675821 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1119 03:04:32.214963 1675821 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1119 03:04:32.215039 1675821 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1119 03:04:32.314289 1675821 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1119 03:04:32.314363 1675821 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1119 03:04:32.381856 1675821 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1119 03:04:32.381929 1675821 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1119 03:04:32.442578 1675821 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1119 03:04:32.442654 1675821 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1119 03:04:32.475472 1675821 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1119 03:04:32.475543 1675821 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1119 03:04:30.906475 1673805 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1119 03:04:30.907259 1673805 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-889743 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1119 03:04:31.524970 1673805 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1119 03:04:31.977880 1673805 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1119 03:04:33.261079 1673805 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1119 03:04:33.261659 1673805 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1119 03:04:33.988599 1673805 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1119 03:04:34.308962 1673805 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1119 03:04:34.842579 1673805 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1119 03:04:35.661873 1673805 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1119 03:04:36.322186 1673805 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1119 03:04:36.323316 1673805 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1119 03:04:36.326239 1673805 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1119 03:04:32.518133 1675821 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1119 03:04:36.329571 1673805 out.go:252]   - Booting up control plane ...
	I1119 03:04:36.329677 1673805 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1119 03:04:36.333872 1673805 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1119 03:04:36.335689 1673805 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1119 03:04:36.380991 1673805 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1119 03:04:36.381105 1673805 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1119 03:04:36.395764 1673805 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1119 03:04:36.395869 1673805 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1119 03:04:36.395912 1673805 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1119 03:04:36.658923 1673805 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1119 03:04:36.659052 1673805 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1119 03:04:37.661858 1673805 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.000807847s
	I1119 03:04:37.663415 1673805 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1119 03:04:37.663769 1673805 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1119 03:04:37.664600 1673805 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1119 03:04:37.665161 1673805 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1119 03:04:41.678115 1675821 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.825826031s)
	I1119 03:04:41.678170 1675821 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.770141712s)
	I1119 03:04:41.678516 1675821 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (9.748300738s)
	I1119 03:04:41.678553 1675821 node_ready.go:35] waiting up to 6m0s for node "no-preload-800908" to be "Ready" ...
	I1119 03:04:41.835784 1675821 node_ready.go:49] node "no-preload-800908" is "Ready"
	I1119 03:04:41.835832 1675821 node_ready.go:38] duration metric: took 157.251982ms for node "no-preload-800908" to be "Ready" ...
	I1119 03:04:41.835847 1675821 api_server.go:52] waiting for apiserver process to appear ...
	I1119 03:04:41.835921 1675821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 03:04:42.378875 1675821 api_server.go:72] duration metric: took 11.056088729s to wait for apiserver process to appear ...
	I1119 03:04:42.378908 1675821 api_server.go:88] waiting for apiserver healthz status ...
	I1119 03:04:42.378932 1675821 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1119 03:04:42.379459 1675821 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (9.861231423s)
	I1119 03:04:42.382679 1675821 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-800908 addons enable metrics-server
	
	I1119 03:04:42.385799 1675821 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1119 03:04:42.388899 1675821 addons.go:515] duration metric: took 11.066255609s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1119 03:04:42.407169 1675821 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1119 03:04:42.407205 1675821 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1119 03:04:44.288282 1673805 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 6.623215896s
	I1119 03:04:45.852812 1673805 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 8.187192755s
	I1119 03:04:46.670007 1673805 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 9.00579359s
	I1119 03:04:46.700029 1673805 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1119 03:04:46.719080 1673805 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1119 03:04:46.738019 1673805 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1119 03:04:46.738498 1673805 kubeadm.go:319] [mark-control-plane] Marking the node auto-889743 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1119 03:04:46.753079 1673805 kubeadm.go:319] [bootstrap-token] Using token: izm2gu.you734hy063zwcav
	I1119 03:04:42.879884 1675821 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1119 03:04:42.892510 1675821 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1119 03:04:42.893980 1675821 api_server.go:141] control plane version: v1.34.1
	I1119 03:04:42.894008 1675821 api_server.go:131] duration metric: took 515.092166ms to wait for apiserver health ...
	I1119 03:04:42.894018 1675821 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 03:04:42.928361 1675821 system_pods.go:59] 8 kube-system pods found
	I1119 03:04:42.928417 1675821 system_pods.go:61] "coredns-66bc5c9577-5gb8d" [f2cf06c3-a27f-4205-bf83-035adba73690] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 03:04:42.928428 1675821 system_pods.go:61] "etcd-no-preload-800908" [4b2e2353-9488-40c1-a11f-79c5089e6fe1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1119 03:04:42.928435 1675821 system_pods.go:61] "kindnet-hcdj9" [dc9e982d-8e14-47c6-a9a3-a4502602caa4] Running
	I1119 03:04:42.928443 1675821 system_pods.go:61] "kube-apiserver-no-preload-800908" [3378061b-4194-4784-b307-f948fa017d4a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1119 03:04:42.928449 1675821 system_pods.go:61] "kube-controller-manager-no-preload-800908" [cb7bca27-b010-4e89-adb5-9303f09112c5] Running
	I1119 03:04:42.928455 1675821 system_pods.go:61] "kube-proxy-59bnq" [6b6ee3ab-c31d-447c-895b-d341732cb482] Running
	I1119 03:04:42.928462 1675821 system_pods.go:61] "kube-scheduler-no-preload-800908" [214dd1d7-19ed-477b-8170-e9ddfdc6a14b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1119 03:04:42.928468 1675821 system_pods.go:61] "storage-provisioner" [41c9b9d6-c070-4f5d-92ec-e0f2baf1609d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 03:04:42.928481 1675821 system_pods.go:74] duration metric: took 34.458088ms to wait for pod list to return data ...
	I1119 03:04:42.928491 1675821 default_sa.go:34] waiting for default service account to be created ...
	I1119 03:04:42.936921 1675821 default_sa.go:45] found service account: "default"
	I1119 03:04:42.936954 1675821 default_sa.go:55] duration metric: took 8.45127ms for default service account to be created ...
	I1119 03:04:42.936964 1675821 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 03:04:43.013602 1675821 system_pods.go:86] 8 kube-system pods found
	I1119 03:04:43.013671 1675821 system_pods.go:89] "coredns-66bc5c9577-5gb8d" [f2cf06c3-a27f-4205-bf83-035adba73690] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 03:04:43.013683 1675821 system_pods.go:89] "etcd-no-preload-800908" [4b2e2353-9488-40c1-a11f-79c5089e6fe1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1119 03:04:43.013698 1675821 system_pods.go:89] "kindnet-hcdj9" [dc9e982d-8e14-47c6-a9a3-a4502602caa4] Running
	I1119 03:04:43.013706 1675821 system_pods.go:89] "kube-apiserver-no-preload-800908" [3378061b-4194-4784-b307-f948fa017d4a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1119 03:04:43.013716 1675821 system_pods.go:89] "kube-controller-manager-no-preload-800908" [cb7bca27-b010-4e89-adb5-9303f09112c5] Running
	I1119 03:04:43.013721 1675821 system_pods.go:89] "kube-proxy-59bnq" [6b6ee3ab-c31d-447c-895b-d341732cb482] Running
	I1119 03:04:43.013735 1675821 system_pods.go:89] "kube-scheduler-no-preload-800908" [214dd1d7-19ed-477b-8170-e9ddfdc6a14b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1119 03:04:43.013750 1675821 system_pods.go:89] "storage-provisioner" [41c9b9d6-c070-4f5d-92ec-e0f2baf1609d] Running
	I1119 03:04:43.013758 1675821 system_pods.go:126] duration metric: took 76.78764ms to wait for k8s-apps to be running ...
	I1119 03:04:43.013768 1675821 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 03:04:43.013835 1675821 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 03:04:43.036006 1675821 system_svc.go:56] duration metric: took 22.227022ms WaitForService to wait for kubelet
	I1119 03:04:43.036037 1675821 kubeadm.go:587] duration metric: took 11.713270342s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 03:04:43.036055 1675821 node_conditions.go:102] verifying NodePressure condition ...
	I1119 03:04:43.039043 1675821 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1119 03:04:43.039077 1675821 node_conditions.go:123] node cpu capacity is 2
	I1119 03:04:43.039089 1675821 node_conditions.go:105] duration metric: took 3.028106ms to run NodePressure ...
	I1119 03:04:43.039110 1675821 start.go:242] waiting for startup goroutines ...
	I1119 03:04:43.039122 1675821 start.go:247] waiting for cluster config update ...
	I1119 03:04:43.039134 1675821 start.go:256] writing updated cluster config ...
	I1119 03:04:43.039456 1675821 ssh_runner.go:195] Run: rm -f paused
	I1119 03:04:43.046737 1675821 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 03:04:43.051725 1675821 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-5gb8d" in "kube-system" namespace to be "Ready" or be gone ...
	W1119 03:04:45.062364 1675821 pod_ready.go:104] pod "coredns-66bc5c9577-5gb8d" is not "Ready", error: <nil>
	I1119 03:04:46.755582 1673805 out.go:252]   - Configuring RBAC rules ...
	I1119 03:04:46.755698 1673805 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1119 03:04:46.762541 1673805 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1119 03:04:46.771841 1673805 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1119 03:04:46.778006 1673805 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1119 03:04:46.784858 1673805 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1119 03:04:46.794738 1673805 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1119 03:04:47.080924 1673805 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1119 03:04:47.598722 1673805 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1119 03:04:48.077970 1673805 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1119 03:04:48.079243 1673805 kubeadm.go:319] 
	I1119 03:04:48.079326 1673805 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1119 03:04:48.079333 1673805 kubeadm.go:319] 
	I1119 03:04:48.079409 1673805 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1119 03:04:48.079414 1673805 kubeadm.go:319] 
	I1119 03:04:48.079439 1673805 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1119 03:04:48.079498 1673805 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1119 03:04:48.079556 1673805 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1119 03:04:48.079561 1673805 kubeadm.go:319] 
	I1119 03:04:48.079614 1673805 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1119 03:04:48.079619 1673805 kubeadm.go:319] 
	I1119 03:04:48.079667 1673805 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1119 03:04:48.079672 1673805 kubeadm.go:319] 
	I1119 03:04:48.079723 1673805 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1119 03:04:48.079800 1673805 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1119 03:04:48.079867 1673805 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1119 03:04:48.079872 1673805 kubeadm.go:319] 
	I1119 03:04:48.079956 1673805 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1119 03:04:48.080032 1673805 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1119 03:04:48.080036 1673805 kubeadm.go:319] 
	I1119 03:04:48.080119 1673805 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token izm2gu.you734hy063zwcav \
	I1119 03:04:48.080222 1673805 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:abb22cc8ae8e186956cff8cc7dabd6326c697e35c4ead85bcd3b5702cdc3f73a \
	I1119 03:04:48.080256 1673805 kubeadm.go:319] 	--control-plane 
	I1119 03:04:48.080261 1673805 kubeadm.go:319] 
	I1119 03:04:48.080345 1673805 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1119 03:04:48.080349 1673805 kubeadm.go:319] 
	I1119 03:04:48.080430 1673805 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token izm2gu.you734hy063zwcav \
	I1119 03:04:48.080532 1673805 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:abb22cc8ae8e186956cff8cc7dabd6326c697e35c4ead85bcd3b5702cdc3f73a 
	I1119 03:04:48.092947 1673805 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1119 03:04:48.093183 1673805 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1119 03:04:48.093288 1673805 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1119 03:04:48.093318 1673805 cni.go:84] Creating CNI manager for ""
	I1119 03:04:48.093326 1673805 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 03:04:48.096518 1673805 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1119 03:04:48.100447 1673805 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1119 03:04:48.108052 1673805 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1119 03:04:48.108070 1673805 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1119 03:04:48.149152 1673805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1119 03:04:48.613068 1673805 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1119 03:04:48.613195 1673805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 03:04:48.613274 1673805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-889743 minikube.k8s.io/updated_at=2025_11_19T03_04_48_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ebd49efe8b7d44f89fc96e21beffcd64dfee8277 minikube.k8s.io/name=auto-889743 minikube.k8s.io/primary=true
	I1119 03:04:49.091212 1673805 ops.go:34] apiserver oom_adj: -16
	I1119 03:04:49.091311 1673805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 03:04:49.591927 1673805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1119 03:04:47.561526 1675821 pod_ready.go:104] pod "coredns-66bc5c9577-5gb8d" is not "Ready", error: <nil>
	W1119 03:04:50.058398 1675821 pod_ready.go:104] pod "coredns-66bc5c9577-5gb8d" is not "Ready", error: <nil>
	I1119 03:04:50.091721 1673805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 03:04:50.591435 1673805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 03:04:51.091983 1673805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 03:04:51.591428 1673805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 03:04:52.092238 1673805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 03:04:52.591857 1673805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 03:04:52.817217 1673805 kubeadm.go:1114] duration metric: took 4.20406345s to wait for elevateKubeSystemPrivileges
	I1119 03:04:52.817244 1673805 kubeadm.go:403] duration metric: took 30.069681666s to StartCluster
	I1119 03:04:52.817261 1673805 settings.go:142] acquiring lock: {Name:mk60840cd190596747bdc8f4ec6ab9c30048bf11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 03:04:52.817323 1673805 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21924-1463525/kubeconfig
	I1119 03:04:52.818369 1673805 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/kubeconfig: {Name:mk569dbb5fa76b01404eb61ebb04e9884665ea78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 03:04:52.818560 1673805 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 03:04:52.818640 1673805 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1119 03:04:52.818909 1673805 config.go:182] Loaded profile config "auto-889743": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 03:04:52.819027 1673805 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 03:04:52.819087 1673805 addons.go:70] Setting storage-provisioner=true in profile "auto-889743"
	I1119 03:04:52.819101 1673805 addons.go:239] Setting addon storage-provisioner=true in "auto-889743"
	I1119 03:04:52.819124 1673805 host.go:66] Checking if "auto-889743" exists ...
	I1119 03:04:52.819584 1673805 cli_runner.go:164] Run: docker container inspect auto-889743 --format={{.State.Status}}
	I1119 03:04:52.820895 1673805 addons.go:70] Setting default-storageclass=true in profile "auto-889743"
	I1119 03:04:52.820926 1673805 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "auto-889743"
	I1119 03:04:52.821268 1673805 cli_runner.go:164] Run: docker container inspect auto-889743 --format={{.State.Status}}
	I1119 03:04:52.826792 1673805 out.go:179] * Verifying Kubernetes components...
	I1119 03:04:52.832349 1673805 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 03:04:52.862574 1673805 addons.go:239] Setting addon default-storageclass=true in "auto-889743"
	I1119 03:04:52.862613 1673805 host.go:66] Checking if "auto-889743" exists ...
	I1119 03:04:52.863169 1673805 cli_runner.go:164] Run: docker container inspect auto-889743 --format={{.State.Status}}
	I1119 03:04:52.873117 1673805 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 03:04:52.878366 1673805 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 03:04:52.878389 1673805 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 03:04:52.878472 1673805 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-889743
	I1119 03:04:52.919244 1673805 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 03:04:52.919265 1673805 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 03:04:52.919329 1673805 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-889743
	I1119 03:04:52.941992 1673805 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34940 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/auto-889743/id_rsa Username:docker}
	I1119 03:04:52.966943 1673805 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34940 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/auto-889743/id_rsa Username:docker}
	I1119 03:04:53.377494 1673805 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 03:04:53.545109 1673805 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1119 03:04:53.545234 1673805 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 03:04:53.573334 1673805 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 03:04:54.677924 1673805 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.132657524s)
	I1119 03:04:54.678786 1673805 node_ready.go:35] waiting up to 15m0s for node "auto-889743" to be "Ready" ...
	I1119 03:04:54.678999 1673805 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.133863421s)
	I1119 03:04:54.679020 1673805 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1119 03:04:54.995063 1673805 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.421692516s)
	I1119 03:04:54.998241 1673805 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1119 03:04:55.001338 1673805 addons.go:515] duration metric: took 2.182306459s for enable addons: enabled=[default-storageclass storage-provisioner]
	W1119 03:04:52.560238 1675821 pod_ready.go:104] pod "coredns-66bc5c9577-5gb8d" is not "Ready", error: <nil>
	W1119 03:04:54.560900 1675821 pod_ready.go:104] pod "coredns-66bc5c9577-5gb8d" is not "Ready", error: <nil>
	W1119 03:04:56.562003 1675821 pod_ready.go:104] pod "coredns-66bc5c9577-5gb8d" is not "Ready", error: <nil>
	I1119 03:04:55.184834 1673805 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-889743" context rescaled to 1 replicas
	W1119 03:04:56.682153 1673805 node_ready.go:57] node "auto-889743" has "Ready":"False" status (will retry)
	W1119 03:04:59.182341 1673805 node_ready.go:57] node "auto-889743" has "Ready":"False" status (will retry)
	W1119 03:04:59.057801 1675821 pod_ready.go:104] pod "coredns-66bc5c9577-5gb8d" is not "Ready", error: <nil>
	W1119 03:05:01.557205 1675821 pod_ready.go:104] pod "coredns-66bc5c9577-5gb8d" is not "Ready", error: <nil>
	W1119 03:05:01.683505 1673805 node_ready.go:57] node "auto-889743" has "Ready":"False" status (will retry)
	W1119 03:05:04.182653 1673805 node_ready.go:57] node "auto-889743" has "Ready":"False" status (will retry)
	W1119 03:05:04.057424 1675821 pod_ready.go:104] pod "coredns-66bc5c9577-5gb8d" is not "Ready", error: <nil>
	W1119 03:05:06.058179 1675821 pod_ready.go:104] pod "coredns-66bc5c9577-5gb8d" is not "Ready", error: <nil>
	W1119 03:05:06.681991 1673805 node_ready.go:57] node "auto-889743" has "Ready":"False" status (will retry)
	W1119 03:05:08.682220 1673805 node_ready.go:57] node "auto-889743" has "Ready":"False" status (will retry)
	W1119 03:05:08.557071 1675821 pod_ready.go:104] pod "coredns-66bc5c9577-5gb8d" is not "Ready", error: <nil>
	W1119 03:05:10.557736 1675821 pod_ready.go:104] pod "coredns-66bc5c9577-5gb8d" is not "Ready", error: <nil>
	W1119 03:05:11.182556 1673805 node_ready.go:57] node "auto-889743" has "Ready":"False" status (will retry)
	W1119 03:05:13.182744 1673805 node_ready.go:57] node "auto-889743" has "Ready":"False" status (will retry)
	W1119 03:05:12.558512 1675821 pod_ready.go:104] pod "coredns-66bc5c9577-5gb8d" is not "Ready", error: <nil>
	I1119 03:05:14.058423 1675821 pod_ready.go:94] pod "coredns-66bc5c9577-5gb8d" is "Ready"
	I1119 03:05:14.058451 1675821 pod_ready.go:86] duration metric: took 31.006688461s for pod "coredns-66bc5c9577-5gb8d" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:05:14.061388 1675821 pod_ready.go:83] waiting for pod "etcd-no-preload-800908" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:05:14.067018 1675821 pod_ready.go:94] pod "etcd-no-preload-800908" is "Ready"
	I1119 03:05:14.067047 1675821 pod_ready.go:86] duration metric: took 5.632347ms for pod "etcd-no-preload-800908" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:05:14.069905 1675821 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-800908" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:05:14.075237 1675821 pod_ready.go:94] pod "kube-apiserver-no-preload-800908" is "Ready"
	I1119 03:05:14.075261 1675821 pod_ready.go:86] duration metric: took 5.325377ms for pod "kube-apiserver-no-preload-800908" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:05:14.077812 1675821 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-800908" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:05:14.256574 1675821 pod_ready.go:94] pod "kube-controller-manager-no-preload-800908" is "Ready"
	I1119 03:05:14.256651 1675821 pod_ready.go:86] duration metric: took 178.810837ms for pod "kube-controller-manager-no-preload-800908" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:05:14.456820 1675821 pod_ready.go:83] waiting for pod "kube-proxy-59bnq" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:05:14.855845 1675821 pod_ready.go:94] pod "kube-proxy-59bnq" is "Ready"
	I1119 03:05:14.855872 1675821 pod_ready.go:86] duration metric: took 399.023268ms for pod "kube-proxy-59bnq" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:05:15.056311 1675821 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-800908" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:05:15.456176 1675821 pod_ready.go:94] pod "kube-scheduler-no-preload-800908" is "Ready"
	I1119 03:05:15.456206 1675821 pod_ready.go:86] duration metric: took 399.869653ms for pod "kube-scheduler-no-preload-800908" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 03:05:15.456220 1675821 pod_ready.go:40] duration metric: took 32.409431473s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 03:05:15.511262 1675821 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1119 03:05:15.516424 1675821 out.go:179] * Done! kubectl is now configured to use "no-preload-800908" cluster and "default" namespace by default
	W1119 03:05:15.681473 1673805 node_ready.go:57] node "auto-889743" has "Ready":"False" status (will retry)
	W1119 03:05:17.682390 1673805 node_ready.go:57] node "auto-889743" has "Ready":"False" status (will retry)
	W1119 03:05:20.182528 1673805 node_ready.go:57] node "auto-889743" has "Ready":"False" status (will retry)
	W1119 03:05:22.682264 1673805 node_ready.go:57] node "auto-889743" has "Ready":"False" status (will retry)
	W1119 03:05:25.182302 1673805 node_ready.go:57] node "auto-889743" has "Ready":"False" status (will retry)
	W1119 03:05:27.182430 1673805 node_ready.go:57] node "auto-889743" has "Ready":"False" status (will retry)
	W1119 03:05:29.682826 1673805 node_ready.go:57] node "auto-889743" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Nov 19 03:05:09 no-preload-800908 crio[661]: time="2025-11-19T03:05:09.9309466Z" level=info msg="Removed container 3080995646c4c1115cc00e958b65b6bf12f7dc431aa8b6c75b84f491b2ed1c0c: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x82d2/dashboard-metrics-scraper" id=82c59f63-c4ed-4474-aa12-fdec436a56a8 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 19 03:05:12 no-preload-800908 conmon[1138]: conmon a4b14efb5df254be9911 <ninfo>: container 1158 exited with status 1
	Nov 19 03:05:12 no-preload-800908 crio[661]: time="2025-11-19T03:05:12.927921345Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=d6944dd3-f9c1-420a-9165-10495c0624d2 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 03:05:12 no-preload-800908 crio[661]: time="2025-11-19T03:05:12.928882376Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=482ee2fd-802f-4623-b96b-c450946493b0 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 03:05:12 no-preload-800908 crio[661]: time="2025-11-19T03:05:12.929828334Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=0f514a3d-0e5c-4065-84ca-1f223d28fd70 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 03:05:12 no-preload-800908 crio[661]: time="2025-11-19T03:05:12.929948716Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 03:05:12 no-preload-800908 crio[661]: time="2025-11-19T03:05:12.937781132Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 03:05:12 no-preload-800908 crio[661]: time="2025-11-19T03:05:12.937948643Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/1e07bacc1c5d6ccf720bf8db4005c7cc4fffca98a3ecf22af83c604e7c7ee1fa/merged/etc/passwd: no such file or directory"
	Nov 19 03:05:12 no-preload-800908 crio[661]: time="2025-11-19T03:05:12.937970583Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/1e07bacc1c5d6ccf720bf8db4005c7cc4fffca98a3ecf22af83c604e7c7ee1fa/merged/etc/group: no such file or directory"
	Nov 19 03:05:12 no-preload-800908 crio[661]: time="2025-11-19T03:05:12.938203396Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 03:05:12 no-preload-800908 crio[661]: time="2025-11-19T03:05:12.975719153Z" level=info msg="Created container 5c44fc33f2c6f591d084800a048552c1fe51bc9a96b100574aab26d266ae2d23: kube-system/storage-provisioner/storage-provisioner" id=0f514a3d-0e5c-4065-84ca-1f223d28fd70 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 03:05:12 no-preload-800908 crio[661]: time="2025-11-19T03:05:12.977362183Z" level=info msg="Starting container: 5c44fc33f2c6f591d084800a048552c1fe51bc9a96b100574aab26d266ae2d23" id=97a65a04-4f72-4aef-b828-02c14f784e70 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 03:05:12 no-preload-800908 crio[661]: time="2025-11-19T03:05:12.979239848Z" level=info msg="Started container" PID=1650 containerID=5c44fc33f2c6f591d084800a048552c1fe51bc9a96b100574aab26d266ae2d23 description=kube-system/storage-provisioner/storage-provisioner id=97a65a04-4f72-4aef-b828-02c14f784e70 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2e41347459c5e3c18cc7c1d6c7a0d6de584709b86f43ba68c936dae3a1e084fd
	Nov 19 03:05:22 no-preload-800908 crio[661]: time="2025-11-19T03:05:22.010198568Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 03:05:22 no-preload-800908 crio[661]: time="2025-11-19T03:05:22.018716576Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 03:05:22 no-preload-800908 crio[661]: time="2025-11-19T03:05:22.018759595Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 03:05:22 no-preload-800908 crio[661]: time="2025-11-19T03:05:22.018788533Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 03:05:22 no-preload-800908 crio[661]: time="2025-11-19T03:05:22.023393094Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 03:05:22 no-preload-800908 crio[661]: time="2025-11-19T03:05:22.02342982Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 03:05:22 no-preload-800908 crio[661]: time="2025-11-19T03:05:22.023453097Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 03:05:22 no-preload-800908 crio[661]: time="2025-11-19T03:05:22.027015579Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 03:05:22 no-preload-800908 crio[661]: time="2025-11-19T03:05:22.027052559Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 03:05:22 no-preload-800908 crio[661]: time="2025-11-19T03:05:22.027140351Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 03:05:22 no-preload-800908 crio[661]: time="2025-11-19T03:05:22.03073032Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 03:05:22 no-preload-800908 crio[661]: time="2025-11-19T03:05:22.030764739Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	5c44fc33f2c6f       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           20 seconds ago       Running             storage-provisioner         2                   2e41347459c5e       storage-provisioner                          kube-system
	1a78dba8bce36       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           23 seconds ago       Exited              dashboard-metrics-scraper   2                   0ca84d350c936       dashboard-metrics-scraper-6ffb444bf9-x82d2   kubernetes-dashboard
	c867c5f07d95f       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   35 seconds ago       Running             kubernetes-dashboard        0                   ac1f4936eac03       kubernetes-dashboard-855c9754f9-kwdms        kubernetes-dashboard
	9aebe2a3b6de1       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           51 seconds ago       Running             coredns                     1                   fe2bdb28343bf       coredns-66bc5c9577-5gb8d                     kube-system
	d2367a6e58c55       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           51 seconds ago       Running             busybox                     1                   d98822b45a80b       busybox                                      default
	8aba6b7c7be44       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           52 seconds ago       Running             kindnet-cni                 1                   6329d08a63d37       kindnet-hcdj9                                kube-system
	a4b14efb5df25       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           52 seconds ago       Exited              storage-provisioner         1                   2e41347459c5e       storage-provisioner                          kube-system
	c47e30a501ed7       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           52 seconds ago       Running             kube-proxy                  1                   00f0f90c14f1a       kube-proxy-59bnq                             kube-system
	1d586c7c3109f       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   5b026e48adcf0       kube-controller-manager-no-preload-800908    kube-system
	bb7e2b0cb0cd0       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   71173724e5239       kube-apiserver-no-preload-800908             kube-system
	d72640f599edb       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   5ed89208c5638       kube-scheduler-no-preload-800908             kube-system
	1c5a8ad5bc6a5       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   845682ac46fa1       etcd-no-preload-800908                       kube-system
	
	
	==> coredns [9aebe2a3b6de103c7f8b9e2bd8f80a05a5befe8af91661090dd46344cae6c829] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45707 - 53196 "HINFO IN 2505686162092607696.3713221052933664493. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.03496525s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-800908
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-800908
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ebd49efe8b7d44f89fc96e21beffcd64dfee8277
	                    minikube.k8s.io/name=no-preload-800908
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T03_03_32_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 03:03:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-800908
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 03:05:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 03:05:00 +0000   Wed, 19 Nov 2025 03:03:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 03:05:00 +0000   Wed, 19 Nov 2025 03:03:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 03:05:00 +0000   Wed, 19 Nov 2025 03:03:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 03:05:00 +0000   Wed, 19 Nov 2025 03:03:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-800908
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 952c288fdad5a6f53a4deda5691cff59
	  System UUID:                792d2464-6007-420a-8ab8-fddc03078e19
	  Boot ID:                    b92b1939-fcd0-45dc-ac89-2d161566a71c
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 coredns-66bc5c9577-5gb8d                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     117s
	  kube-system                 etcd-no-preload-800908                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m2s
	  kube-system                 kindnet-hcdj9                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      118s
	  kube-system                 kube-apiserver-no-preload-800908              250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-controller-manager-no-preload-800908     200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-proxy-59bnq                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-scheduler-no-preload-800908              100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         114s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-x82d2    0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-kwdms         0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 115s                   kube-proxy       
	  Normal   Starting                 50s                    kube-proxy       
	  Warning  CgroupV1                 2m15s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m15s (x8 over 2m15s)  kubelet          Node no-preload-800908 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m15s (x8 over 2m15s)  kubelet          Node no-preload-800908 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m15s (x8 over 2m15s)  kubelet          Node no-preload-800908 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m3s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m3s                   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m2s                   kubelet          Node no-preload-800908 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m2s                   kubelet          Node no-preload-800908 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m2s                   kubelet          Node no-preload-800908 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           118s                   node-controller  Node no-preload-800908 event: Registered Node no-preload-800908 in Controller
	  Normal   NodeReady                100s                   kubelet          Node no-preload-800908 status is now: NodeReady
	  Normal   Starting                 63s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 63s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  63s (x8 over 63s)      kubelet          Node no-preload-800908 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    63s (x8 over 63s)      kubelet          Node no-preload-800908 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     63s (x8 over 63s)      kubelet          Node no-preload-800908 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           48s                    node-controller  Node no-preload-800908 event: Registered Node no-preload-800908 in Controller
	
	
	==> dmesg <==
	[Nov19 02:42] overlayfs: idmapped layers are currently not supported
	[ +16.386117] overlayfs: idmapped layers are currently not supported
	[Nov19 02:43] overlayfs: idmapped layers are currently not supported
	[ +23.762081] overlayfs: idmapped layers are currently not supported
	[Nov19 02:45] overlayfs: idmapped layers are currently not supported
	[Nov19 02:46] overlayfs: idmapped layers are currently not supported
	[Nov19 02:48] overlayfs: idmapped layers are currently not supported
	[Nov19 02:50] overlayfs: idmapped layers are currently not supported
	[ +30.622614] overlayfs: idmapped layers are currently not supported
	[Nov19 02:53] overlayfs: idmapped layers are currently not supported
	[Nov19 02:55] overlayfs: idmapped layers are currently not supported
	[ +48.629499] overlayfs: idmapped layers are currently not supported
	[Nov19 02:56] overlayfs: idmapped layers are currently not supported
	[ +31.470515] overlayfs: idmapped layers are currently not supported
	[Nov19 02:57] overlayfs: idmapped layers are currently not supported
	[Nov19 02:58] overlayfs: idmapped layers are currently not supported
	[Nov19 03:00] overlayfs: idmapped layers are currently not supported
	[  +8.385032] overlayfs: idmapped layers are currently not supported
	[Nov19 03:01] overlayfs: idmapped layers are currently not supported
	[  +9.842210] overlayfs: idmapped layers are currently not supported
	[Nov19 03:02] overlayfs: idmapped layers are currently not supported
	[Nov19 03:03] overlayfs: idmapped layers are currently not supported
	[ +33.377847] overlayfs: idmapped layers are currently not supported
	[Nov19 03:04] overlayfs: idmapped layers are currently not supported
	[  +7.075500] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [1c5a8ad5bc6a5d13b6cef75a968c097e0e15feaca2933a332cc62792968879fc] <==
	{"level":"warn","ts":"2025-11-19T03:04:35.096868Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:04:35.185043Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:04:35.214452Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:04:35.255419Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:04:35.325795Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:04:35.368442Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:04:35.401286Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:04:35.423499Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:04:35.476476Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:04:35.530034Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:04:35.589530Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:04:35.630316Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:04:35.670528Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:04:35.778085Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:04:35.842632Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:04:35.895687Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:04:35.962206Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:04:36.002960Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:04:36.051150Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:04:36.255961Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T03:04:40.417867Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"119.84165ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/view\" limit:1 ","response":"range_response_count:1 size:2208"}
	{"level":"info","ts":"2025-11-19T03:04:40.417940Z","caller":"traceutil/trace.go:172","msg":"trace[2022240257] range","detail":"{range_begin:/registry/clusterroles/view; range_end:; response_count:1; response_revision:485; }","duration":"119.943621ms","start":"2025-11-19T03:04:40.297984Z","end":"2025-11-19T03:04:40.417928Z","steps":["trace[2022240257] 'agreement among raft nodes before linearized reading'  (duration: 119.719619ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T03:04:40.422084Z","caller":"traceutil/trace.go:172","msg":"trace[1435618654] transaction","detail":"{read_only:false; response_revision:485; number_of_response:1; }","duration":"255.34206ms","start":"2025-11-19T03:04:40.166718Z","end":"2025-11-19T03:04:40.422060Z","steps":["trace[1435618654] 'process raft request'  (duration: 131.192816ms)","trace[1435618654] 'compare'  (duration: 92.030805ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-19T03:04:40.432037Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"133.828819ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/default/default\" limit:1 ","response":"range_response_count:1 size:171"}
	{"level":"info","ts":"2025-11-19T03:04:40.432090Z","caller":"traceutil/trace.go:172","msg":"trace[856781732] range","detail":"{range_begin:/registry/serviceaccounts/default/default; range_end:; response_count:1; response_revision:485; }","duration":"133.89559ms","start":"2025-11-19T03:04:40.298183Z","end":"2025-11-19T03:04:40.432078Z","steps":["trace[856781732] 'agreement among raft nodes before linearized reading'  (duration: 133.731312ms)"],"step_count":1}
	
	
	==> kernel <==
	 03:05:33 up 10:47,  0 user,  load average: 4.58, 4.30, 3.22
	Linux no-preload-800908 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [8aba6b7c7be445a2875873b755efd1399e985179e6a913cc3aefc480b738613c] <==
	I1119 03:04:41.830195       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 03:04:41.830588       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1119 03:04:41.830797       1 main.go:148] setting mtu 1500 for CNI 
	I1119 03:04:41.830856       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 03:04:41.830897       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T03:04:41Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 03:04:42.002596       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 03:04:42.026405       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 03:04:42.026566       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 03:04:42.027357       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1119 03:05:12.011031       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1119 03:05:12.027926       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1119 03:05:12.028610       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1119 03:05:12.028815       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1119 03:05:13.327708       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 03:05:13.328388       1 metrics.go:72] Registering metrics
	I1119 03:05:13.328510       1 controller.go:711] "Syncing nftables rules"
	I1119 03:05:22.002282       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1119 03:05:22.002320       1 main.go:301] handling current node
	I1119 03:05:32.002086       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1119 03:05:32.002135       1 main.go:301] handling current node
	
	
	==> kube-apiserver [bb7e2b0cb0cd02d62ac7ad2c37fe309260d9fcd24b72ccd2af687c7b1dcc6ec5] <==
	I1119 03:04:39.079870       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1119 03:04:39.081141       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1119 03:04:39.081182       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1119 03:04:39.083519       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 03:04:39.084744       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1119 03:04:39.259066       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1119 03:04:39.259743       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1119 03:04:39.259759       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1119 03:04:39.392341       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1119 03:04:39.402590       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1119 03:04:39.402928       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1119 03:04:39.462093       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1119 03:04:39.463415       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 03:04:39.463791       1 cache.go:39] Caches are synced for autoregister controller
	E1119 03:04:39.551856       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1119 03:04:40.433695       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1119 03:04:40.722312       1 controller.go:667] quota admission added evaluator for: namespaces
	I1119 03:04:41.177819       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1119 03:04:41.554492       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 03:04:41.752184       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 03:04:42.305891       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.159.183"}
	I1119 03:04:42.347251       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.18.7"}
	I1119 03:04:45.579814       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1119 03:04:45.677583       1 controller.go:667] quota admission added evaluator for: endpoints
	I1119 03:04:45.782588       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [1d586c7c3109fd5ba0aba02ff22f254bea2462e97b24f5d3f134dc24d068e0e6] <==
	I1119 03:04:45.372073       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1119 03:04:45.378096       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1119 03:04:45.378903       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1119 03:04:45.380157       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1119 03:04:45.385575       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1119 03:04:45.385621       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1119 03:04:45.386783       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1119 03:04:45.389148       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1119 03:04:45.391117       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1119 03:04:45.393227       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1119 03:04:45.402337       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1119 03:04:45.409601       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1119 03:04:45.410465       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1119 03:04:45.411525       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 03:04:45.416924       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 03:04:45.420800       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1119 03:04:45.421097       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1119 03:04:45.421165       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1119 03:04:45.430705       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1119 03:04:45.436851       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 03:04:45.446194       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1119 03:04:45.449414       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1119 03:04:45.471922       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 03:04:45.471951       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1119 03:04:45.471959       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [c47e30a501ed736547bbb4377e6df1e33a7226c1b2c94803f55b4e972ff18abd] <==
	I1119 03:04:42.304971       1 server_linux.go:53] "Using iptables proxy"
	I1119 03:04:42.487207       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 03:04:42.589818       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 03:04:42.589930       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1119 03:04:42.590054       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 03:04:42.743744       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 03:04:42.743864       1 server_linux.go:132] "Using iptables Proxier"
	I1119 03:04:42.822212       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 03:04:42.840184       1 server.go:527] "Version info" version="v1.34.1"
	I1119 03:04:42.840290       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 03:04:42.855716       1 config.go:200] "Starting service config controller"
	I1119 03:04:42.863728       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 03:04:42.856093       1 config.go:309] "Starting node config controller"
	I1119 03:04:42.920657       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 03:04:42.920727       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 03:04:42.920765       1 config.go:106] "Starting endpoint slice config controller"
	I1119 03:04:42.920792       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 03:04:42.928054       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 03:04:42.928992       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 03:04:42.992981       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1119 03:04:43.028242       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1119 03:04:43.046693       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [d72640f599edb0a7cc747d54663105ae5e186229c7ab646168a63821cf3e3666] <==
	I1119 03:04:37.719231       1 serving.go:386] Generated self-signed cert in-memory
	I1119 03:04:41.943348       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1119 03:04:41.964046       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 03:04:42.062810       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1119 03:04:42.062959       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1119 03:04:42.063033       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1119 03:04:42.063050       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 03:04:42.063126       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 03:04:42.063271       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1119 03:04:42.063328       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1119 03:04:42.063138       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1119 03:04:42.164725       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1119 03:04:42.164916       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1119 03:04:42.165068       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 19 03:04:41 no-preload-800908 kubelet[785]: W1119 03:04:41.052987     785 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/b531313c62c45257823b101ae62d386ff7365c2550425d41cad2e93644f4aebd/crio-d98822b45a80bad6fbccbb5813f9f591f13992b803c7833a4d9b579e4f2359f1 WatchSource:0}: Error finding container d98822b45a80bad6fbccbb5813f9f591f13992b803c7833a4d9b579e4f2359f1: Status 404 returned error can't find the container with id d98822b45a80bad6fbccbb5813f9f591f13992b803c7833a4d9b579e4f2359f1
	Nov 19 03:04:46 no-preload-800908 kubelet[785]: I1119 03:04:46.088510     785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ht4tl\" (UniqueName: \"kubernetes.io/projected/cda3cac0-7b97-4389-83ee-aafe0acf4899-kube-api-access-ht4tl\") pod \"dashboard-metrics-scraper-6ffb444bf9-x82d2\" (UID: \"cda3cac0-7b97-4389-83ee-aafe0acf4899\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x82d2"
	Nov 19 03:04:46 no-preload-800908 kubelet[785]: I1119 03:04:46.089791     785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/cda3cac0-7b97-4389-83ee-aafe0acf4899-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-x82d2\" (UID: \"cda3cac0-7b97-4389-83ee-aafe0acf4899\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x82d2"
	Nov 19 03:04:46 no-preload-800908 kubelet[785]: I1119 03:04:46.190387     785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85sgd\" (UniqueName: \"kubernetes.io/projected/c2f26d02-e618-4b0f-9089-8c76b6e21ca7-kube-api-access-85sgd\") pod \"kubernetes-dashboard-855c9754f9-kwdms\" (UID: \"c2f26d02-e618-4b0f-9089-8c76b6e21ca7\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-kwdms"
	Nov 19 03:04:46 no-preload-800908 kubelet[785]: I1119 03:04:46.190466     785 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/c2f26d02-e618-4b0f-9089-8c76b6e21ca7-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-kwdms\" (UID: \"c2f26d02-e618-4b0f-9089-8c76b6e21ca7\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-kwdms"
	Nov 19 03:04:46 no-preload-800908 kubelet[785]: W1119 03:04:46.371253     785 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/b531313c62c45257823b101ae62d386ff7365c2550425d41cad2e93644f4aebd/crio-ac1f4936eac03cd78f3fe92080c152b3648f2f01b6638a7a4ac3b7489e09d041 WatchSource:0}: Error finding container ac1f4936eac03cd78f3fe92080c152b3648f2f01b6638a7a4ac3b7489e09d041: Status 404 returned error can't find the container with id ac1f4936eac03cd78f3fe92080c152b3648f2f01b6638a7a4ac3b7489e09d041
	Nov 19 03:04:52 no-preload-800908 kubelet[785]: I1119 03:04:52.846755     785 scope.go:117] "RemoveContainer" containerID="b389957eee06888fbad7a4b52ed3b5abe168324f4457279c05f614c51e0fbe96"
	Nov 19 03:04:53 no-preload-800908 kubelet[785]: I1119 03:04:53.870918     785 scope.go:117] "RemoveContainer" containerID="b389957eee06888fbad7a4b52ed3b5abe168324f4457279c05f614c51e0fbe96"
	Nov 19 03:04:53 no-preload-800908 kubelet[785]: I1119 03:04:53.873417     785 scope.go:117] "RemoveContainer" containerID="3080995646c4c1115cc00e958b65b6bf12f7dc431aa8b6c75b84f491b2ed1c0c"
	Nov 19 03:04:53 no-preload-800908 kubelet[785]: E1119 03:04:53.874004     785 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-x82d2_kubernetes-dashboard(cda3cac0-7b97-4389-83ee-aafe0acf4899)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x82d2" podUID="cda3cac0-7b97-4389-83ee-aafe0acf4899"
	Nov 19 03:04:54 no-preload-800908 kubelet[785]: I1119 03:04:54.874891     785 scope.go:117] "RemoveContainer" containerID="3080995646c4c1115cc00e958b65b6bf12f7dc431aa8b6c75b84f491b2ed1c0c"
	Nov 19 03:04:54 no-preload-800908 kubelet[785]: E1119 03:04:54.875063     785 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-x82d2_kubernetes-dashboard(cda3cac0-7b97-4389-83ee-aafe0acf4899)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x82d2" podUID="cda3cac0-7b97-4389-83ee-aafe0acf4899"
	Nov 19 03:04:56 no-preload-800908 kubelet[785]: I1119 03:04:56.310410     785 scope.go:117] "RemoveContainer" containerID="3080995646c4c1115cc00e958b65b6bf12f7dc431aa8b6c75b84f491b2ed1c0c"
	Nov 19 03:04:56 no-preload-800908 kubelet[785]: E1119 03:04:56.310578     785 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-x82d2_kubernetes-dashboard(cda3cac0-7b97-4389-83ee-aafe0acf4899)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x82d2" podUID="cda3cac0-7b97-4389-83ee-aafe0acf4899"
	Nov 19 03:05:09 no-preload-800908 kubelet[785]: I1119 03:05:09.382107     785 scope.go:117] "RemoveContainer" containerID="3080995646c4c1115cc00e958b65b6bf12f7dc431aa8b6c75b84f491b2ed1c0c"
	Nov 19 03:05:09 no-preload-800908 kubelet[785]: I1119 03:05:09.916430     785 scope.go:117] "RemoveContainer" containerID="3080995646c4c1115cc00e958b65b6bf12f7dc431aa8b6c75b84f491b2ed1c0c"
	Nov 19 03:05:09 no-preload-800908 kubelet[785]: I1119 03:05:09.916713     785 scope.go:117] "RemoveContainer" containerID="1a78dba8bce368677aa036142ffa9608c3867766e29fb2e1011d917c5d6f239f"
	Nov 19 03:05:09 no-preload-800908 kubelet[785]: E1119 03:05:09.916864     785 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-x82d2_kubernetes-dashboard(cda3cac0-7b97-4389-83ee-aafe0acf4899)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x82d2" podUID="cda3cac0-7b97-4389-83ee-aafe0acf4899"
	Nov 19 03:05:09 no-preload-800908 kubelet[785]: I1119 03:05:09.939651     785 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-kwdms" podStartSLOduration=13.225386737000001 podStartE2EDuration="24.939634753s" podCreationTimestamp="2025-11-19 03:04:45 +0000 UTC" firstStartedPulling="2025-11-19 03:04:46.377988752 +0000 UTC m=+16.315880448" lastFinishedPulling="2025-11-19 03:04:58.09223676 +0000 UTC m=+28.030128464" observedRunningTime="2025-11-19 03:04:58.90166311 +0000 UTC m=+28.839554814" watchObservedRunningTime="2025-11-19 03:05:09.939634753 +0000 UTC m=+39.877526457"
	Nov 19 03:05:12 no-preload-800908 kubelet[785]: I1119 03:05:12.927055     785 scope.go:117] "RemoveContainer" containerID="a4b14efb5df254be991154d1dfd68e56342ac94b3a3a071d5cdf8aa75b5e2b0a"
	Nov 19 03:05:16 no-preload-800908 kubelet[785]: I1119 03:05:16.310290     785 scope.go:117] "RemoveContainer" containerID="1a78dba8bce368677aa036142ffa9608c3867766e29fb2e1011d917c5d6f239f"
	Nov 19 03:05:16 no-preload-800908 kubelet[785]: E1119 03:05:16.310494     785 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-x82d2_kubernetes-dashboard(cda3cac0-7b97-4389-83ee-aafe0acf4899)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x82d2" podUID="cda3cac0-7b97-4389-83ee-aafe0acf4899"
	Nov 19 03:05:27 no-preload-800908 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 19 03:05:27 no-preload-800908 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 19 03:05:27 no-preload-800908 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [c867c5f07d95fb9e228a76ad97bd7ec2f39291ceef9462dfcb386be776ad518c] <==
	2025/11/19 03:04:58 Starting overwatch
	2025/11/19 03:04:58 Using namespace: kubernetes-dashboard
	2025/11/19 03:04:58 Using in-cluster config to connect to apiserver
	2025/11/19 03:04:58 Using secret token for csrf signing
	2025/11/19 03:04:58 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/19 03:04:58 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/19 03:04:58 Successful initial request to the apiserver, version: v1.34.1
	2025/11/19 03:04:58 Generating JWE encryption key
	2025/11/19 03:04:58 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/19 03:04:58 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/19 03:04:58 Initializing JWE encryption key from synchronized object
	2025/11/19 03:04:58 Creating in-cluster Sidecar client
	2025/11/19 03:04:58 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/19 03:04:58 Serving insecurely on HTTP port: 9090
	2025/11/19 03:05:28 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [5c44fc33f2c6f591d084800a048552c1fe51bc9a96b100574aab26d266ae2d23] <==
	I1119 03:05:12.995522       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1119 03:05:13.009051       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1119 03:05:13.009117       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1119 03:05:13.012419       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:05:16.467082       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:05:20.728066       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:05:24.326099       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:05:27.379335       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:05:30.405701       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:05:30.413275       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 03:05:30.413574       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1119 03:05:30.413834       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-800908_1607a906-b6a2-4013-9bf6-b35b9e140de0!
	I1119 03:05:30.415645       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ba963751-4855-448d-b28c-3b35fd351123", APIVersion:"v1", ResourceVersion:"677", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-800908_1607a906-b6a2-4013-9bf6-b35b9e140de0 became leader
	W1119 03:05:30.426562       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:05:30.433914       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 03:05:30.514331       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-800908_1607a906-b6a2-4013-9bf6-b35b9e140de0!
	W1119 03:05:32.437887       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 03:05:32.444611       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [a4b14efb5df254be991154d1dfd68e56342ac94b3a3a071d5cdf8aa75b5e2b0a] <==
	I1119 03:04:41.946594       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1119 03:05:12.006078       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-800908 -n no-preload-800908
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-800908 -n no-preload-800908: exit status 2 (421.565808ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-800908 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (7.13s)

                                                
                                    

Test pass (261/328)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 38.69
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.21
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.1/json-events 10.15
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.09
18 TestDownloadOnly/v1.34.1/DeleteAll 0.21
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.59
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 168.1
31 TestAddons/serial/GCPAuth/Namespaces 0.22
32 TestAddons/serial/GCPAuth/FakeCredentials 9.8
48 TestAddons/StoppedEnableDisable 12.71
49 TestCertOptions 35.56
50 TestCertExpiration 246.38
52 TestForceSystemdFlag 37.06
53 TestForceSystemdEnv 41.93
58 TestErrorSpam/setup 33.12
59 TestErrorSpam/start 0.79
60 TestErrorSpam/status 1.19
61 TestErrorSpam/pause 6.01
62 TestErrorSpam/unpause 5.56
63 TestErrorSpam/stop 1.5
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 81.34
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 27.92
70 TestFunctional/serial/KubeContext 0.06
71 TestFunctional/serial/KubectlGetPods 0.09
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.53
75 TestFunctional/serial/CacheCmd/cache/add_local 1.14
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.79
80 TestFunctional/serial/CacheCmd/cache/delete 0.11
81 TestFunctional/serial/MinikubeKubectlCmd 0.15
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.15
83 TestFunctional/serial/ExtraConfig 37.74
84 TestFunctional/serial/ComponentHealth 0.1
85 TestFunctional/serial/LogsCmd 1.44
86 TestFunctional/serial/LogsFileCmd 1.46
87 TestFunctional/serial/InvalidService 3.96
89 TestFunctional/parallel/ConfigCmd 0.48
90 TestFunctional/parallel/DashboardCmd 10.28
91 TestFunctional/parallel/DryRun 0.53
92 TestFunctional/parallel/InternationalLanguage 0.26
93 TestFunctional/parallel/StatusCmd 1.2
98 TestFunctional/parallel/AddonsCmd 0.2
99 TestFunctional/parallel/PersistentVolumeClaim 26.03
101 TestFunctional/parallel/SSHCmd 0.74
102 TestFunctional/parallel/CpCmd 2.34
104 TestFunctional/parallel/FileSync 0.37
105 TestFunctional/parallel/CertSync 2.18
109 TestFunctional/parallel/NodeLabels 0.11
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.69
113 TestFunctional/parallel/License 0.55
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.69
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.43
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.09
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
126 TestFunctional/parallel/ProfileCmd/profile_not_create 0.44
127 TestFunctional/parallel/ProfileCmd/profile_list 0.42
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.41
129 TestFunctional/parallel/MountCmd/any-port 7.95
130 TestFunctional/parallel/MountCmd/specific-port 2.02
131 TestFunctional/parallel/MountCmd/VerifyCleanup 1.73
132 TestFunctional/parallel/ServiceCmd/List 0.61
133 TestFunctional/parallel/ServiceCmd/JSONOutput 0.66
137 TestFunctional/parallel/Version/short 0.08
138 TestFunctional/parallel/Version/components 1.18
139 TestFunctional/parallel/ImageCommands/ImageListShort 0.29
140 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
141 TestFunctional/parallel/ImageCommands/ImageListJson 0.27
142 TestFunctional/parallel/ImageCommands/ImageListYaml 0.26
143 TestFunctional/parallel/ImageCommands/ImageBuild 3.97
144 TestFunctional/parallel/ImageCommands/Setup 0.68
149 TestFunctional/parallel/ImageCommands/ImageRemove 0.61
150 TestFunctional/parallel/UpdateContextCmd/no_changes 0.18
151 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.18
152 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.24
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 182.75
163 TestMultiControlPlane/serial/DeployApp 7.38
164 TestMultiControlPlane/serial/PingHostFromPods 1.57
165 TestMultiControlPlane/serial/AddWorkerNode 60.54
166 TestMultiControlPlane/serial/NodeLabels 0.11
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.16
168 TestMultiControlPlane/serial/CopyFile 19.79
169 TestMultiControlPlane/serial/StopSecondaryNode 12.9
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.81
171 TestMultiControlPlane/serial/RestartSecondaryNode 31.03
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.18
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 131.75
174 TestMultiControlPlane/serial/DeleteSecondaryNode 12.72
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.77
176 TestMultiControlPlane/serial/StopCluster 36.14
177 TestMultiControlPlane/serial/RestartCluster 75.66
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.86
179 TestMultiControlPlane/serial/AddSecondaryNode 78.07
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.03
185 TestJSONOutput/start/Command 82.51
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 5.83
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.24
210 TestKicCustomNetwork/create_custom_network 69.79
211 TestKicCustomNetwork/use_default_bridge_network 38.46
212 TestKicExistingNetwork 37.03
213 TestKicCustomSubnet 38.91
214 TestKicStaticIP 36.74
215 TestMainNoArgs 0.05
216 TestMinikubeProfile 76.43
219 TestMountStart/serial/StartWithMountFirst 9.17
220 TestMountStart/serial/VerifyMountFirst 0.27
221 TestMountStart/serial/StartWithMountSecond 9.65
222 TestMountStart/serial/VerifyMountSecond 0.3
223 TestMountStart/serial/DeleteFirst 1.7
224 TestMountStart/serial/VerifyMountPostDelete 0.27
225 TestMountStart/serial/Stop 1.29
226 TestMountStart/serial/RestartStopped 8.29
227 TestMountStart/serial/VerifyMountPostStop 0.26
230 TestMultiNode/serial/FreshStart2Nodes 137.54
231 TestMultiNode/serial/DeployApp2Nodes 5.2
232 TestMultiNode/serial/PingHostFrom2Pods 0.97
233 TestMultiNode/serial/AddNode 58.28
234 TestMultiNode/serial/MultiNodeLabels 0.09
235 TestMultiNode/serial/ProfileList 0.72
236 TestMultiNode/serial/CopyFile 10.23
237 TestMultiNode/serial/StopNode 2.37
238 TestMultiNode/serial/StartAfterStop 7.82
239 TestMultiNode/serial/RestartKeepsNodes 76.78
240 TestMultiNode/serial/DeleteNode 5.8
241 TestMultiNode/serial/StopMultiNode 24.07
242 TestMultiNode/serial/RestartMultiNode 55.12
243 TestMultiNode/serial/ValidateNameConflict 33.97
248 TestPreload 150.26
250 TestScheduledStopUnix 110.46
253 TestInsufficientStorage 14.57
254 TestRunningBinaryUpgrade 58.96
256 TestKubernetesUpgrade 357.11
257 TestMissingContainerUpgrade 137.67
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
260 TestNoKubernetes/serial/StartWithK8s 41.11
261 TestNoKubernetes/serial/StartWithStopK8s 11.12
262 TestNoKubernetes/serial/Start 9.5
263 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
264 TestNoKubernetes/serial/VerifyK8sNotRunning 0.33
265 TestNoKubernetes/serial/ProfileList 1.21
266 TestNoKubernetes/serial/Stop 1.38
267 TestNoKubernetes/serial/StartNoArgs 8.48
268 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.38
269 TestStoppedBinaryUpgrade/Setup 8.2
270 TestStoppedBinaryUpgrade/Upgrade 53.76
271 TestStoppedBinaryUpgrade/MinikubeLogs 1.2
280 TestPause/serial/Start 82.93
281 TestPause/serial/SecondStartNoReconfiguration 26.38
290 TestNetworkPlugins/group/false 4.01
295 TestStartStop/group/old-k8s-version/serial/FirstStart 64.12
296 TestStartStop/group/old-k8s-version/serial/DeployApp 9.45
298 TestStartStop/group/old-k8s-version/serial/Stop 12
299 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
300 TestStartStop/group/old-k8s-version/serial/SecondStart 48.39
301 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
302 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.12
303 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
306 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 91.57
308 TestStartStop/group/embed-certs/serial/FirstStart 86.02
309 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.42
310 TestStartStop/group/embed-certs/serial/DeployApp 9.32
312 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.99
314 TestStartStop/group/embed-certs/serial/Stop 12.01
315 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.18
316 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 50.15
317 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.24
318 TestStartStop/group/embed-certs/serial/SecondStart 55.46
319 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
320 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
321 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
323 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
324 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
326 TestStartStop/group/no-preload/serial/FirstStart 81
327 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
330 TestStartStop/group/newest-cni/serial/FirstStart 50.95
331 TestStartStop/group/newest-cni/serial/DeployApp 0
333 TestStartStop/group/newest-cni/serial/Stop 1.43
334 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
335 TestStartStop/group/newest-cni/serial/SecondStart 15.39
336 TestStartStop/group/no-preload/serial/DeployApp 10.43
337 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
338 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
339 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.26
342 TestStartStop/group/no-preload/serial/Stop 12.22
343 TestNetworkPlugins/group/auto/Start 88.87
344 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.23
345 TestStartStop/group/no-preload/serial/SecondStart 53.49
346 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
347 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
348 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
350 TestNetworkPlugins/group/kindnet/Start 86.5
351 TestNetworkPlugins/group/auto/KubeletFlags 0.37
352 TestNetworkPlugins/group/auto/NetCatPod 11.37
353 TestNetworkPlugins/group/auto/DNS 0.2
354 TestNetworkPlugins/group/auto/Localhost 0.19
355 TestNetworkPlugins/group/auto/HairPin 0.17
356 TestNetworkPlugins/group/calico/Start 62.06
357 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
358 TestNetworkPlugins/group/kindnet/KubeletFlags 0.45
359 TestNetworkPlugins/group/kindnet/NetCatPod 12.31
360 TestNetworkPlugins/group/calico/ControllerPod 6.01
361 TestNetworkPlugins/group/kindnet/DNS 0.17
362 TestNetworkPlugins/group/kindnet/Localhost 0.13
363 TestNetworkPlugins/group/kindnet/HairPin 0.15
364 TestNetworkPlugins/group/calico/KubeletFlags 0.3
365 TestNetworkPlugins/group/calico/NetCatPod 9.32
366 TestNetworkPlugins/group/calico/DNS 0.26
367 TestNetworkPlugins/group/calico/Localhost 0.27
368 TestNetworkPlugins/group/calico/HairPin 0.18
369 TestNetworkPlugins/group/custom-flannel/Start 63.26
370 TestNetworkPlugins/group/enable-default-cni/Start 78.19
371 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.29
372 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.27
373 TestNetworkPlugins/group/custom-flannel/DNS 0.3
374 TestNetworkPlugins/group/custom-flannel/Localhost 0.24
375 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
376 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.42
377 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.36
378 TestNetworkPlugins/group/flannel/Start 66.82
379 TestNetworkPlugins/group/enable-default-cni/DNS 0.15
380 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
381 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
382 TestNetworkPlugins/group/bridge/Start 52.14
383 TestNetworkPlugins/group/flannel/ControllerPod 6
384 TestNetworkPlugins/group/flannel/KubeletFlags 0.38
385 TestNetworkPlugins/group/flannel/NetCatPod 11.4
386 TestNetworkPlugins/group/flannel/DNS 0.16
387 TestNetworkPlugins/group/flannel/Localhost 0.14
388 TestNetworkPlugins/group/flannel/HairPin 0.15
389 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
390 TestNetworkPlugins/group/bridge/NetCatPod 9.26
391 TestNetworkPlugins/group/bridge/DNS 0.24
392 TestNetworkPlugins/group/bridge/Localhost 0.16
393 TestNetworkPlugins/group/bridge/HairPin 0.17
TestDownloadOnly/v1.28.0/json-events (38.69s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-051528 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-051528 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (38.693500909s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (38.69s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1119 01:57:27.469072 1465377 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1119 01:57:27.469161 1465377 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-051528
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-051528: exit status 85 (82.310897ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-051528 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-051528 │ jenkins │ v1.37.0 │ 19 Nov 25 01:56 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 01:56:48
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 01:56:48.823260 1465383 out.go:360] Setting OutFile to fd 1 ...
	I1119 01:56:48.823452 1465383 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 01:56:48.823479 1465383 out.go:374] Setting ErrFile to fd 2...
	I1119 01:56:48.823498 1465383 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 01:56:48.823800 1465383 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-1463525/.minikube/bin
	W1119 01:56:48.824000 1465383 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21924-1463525/.minikube/config/config.json: open /home/jenkins/minikube-integration/21924-1463525/.minikube/config/config.json: no such file or directory
	I1119 01:56:48.824444 1465383 out.go:368] Setting JSON to true
	I1119 01:56:48.825315 1465383 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":34736,"bootTime":1763482673,"procs":150,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1119 01:56:48.825408 1465383 start.go:143] virtualization:  
	I1119 01:56:48.829482 1465383 out.go:99] [download-only-051528] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1119 01:56:48.829686 1465383 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/preloaded-tarball: no such file or directory
	I1119 01:56:48.829740 1465383 notify.go:221] Checking for updates...
	I1119 01:56:48.832766 1465383 out.go:171] MINIKUBE_LOCATION=21924
	I1119 01:56:48.835880 1465383 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 01:56:48.838847 1465383 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21924-1463525/kubeconfig
	I1119 01:56:48.841772 1465383 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-1463525/.minikube
	I1119 01:56:48.844621 1465383 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1119 01:56:48.850427 1465383 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1119 01:56:48.850752 1465383 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 01:56:48.875401 1465383 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1119 01:56:48.875510 1465383 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 01:56:48.942567 1465383 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-11-19 01:56:48.933454942 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 01:56:48.942686 1465383 docker.go:319] overlay module found
	I1119 01:56:48.945623 1465383 out.go:99] Using the docker driver based on user configuration
	I1119 01:56:48.945665 1465383 start.go:309] selected driver: docker
	I1119 01:56:48.945672 1465383 start.go:930] validating driver "docker" against <nil>
	I1119 01:56:48.945783 1465383 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 01:56:49.003359 1465383 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-11-19 01:56:48.994294401 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 01:56:49.003546 1465383 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1119 01:56:49.003878 1465383 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1119 01:56:49.004052 1465383 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1119 01:56:49.007193 1465383 out.go:171] Using Docker driver with root privileges
	I1119 01:56:49.010075 1465383 cni.go:84] Creating CNI manager for ""
	I1119 01:56:49.010152 1465383 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 01:56:49.010166 1465383 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1119 01:56:49.010263 1465383 start.go:353] cluster config:
	{Name:download-only-051528 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-051528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 01:56:49.013365 1465383 out.go:99] Starting "download-only-051528" primary control-plane node in "download-only-051528" cluster
	I1119 01:56:49.013394 1465383 cache.go:134] Beginning downloading kic base image for docker with crio
	I1119 01:56:49.016180 1465383 out.go:99] Pulling base image v0.0.48-1763507788-21924 ...
	I1119 01:56:49.016243 1465383 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1119 01:56:49.016327 1465383 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1119 01:56:49.031850 1465383 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a to local cache
	I1119 01:56:49.032041 1465383 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local cache directory
	I1119 01:56:49.032143 1465383 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a to local cache
	I1119 01:56:49.087731 1465383 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1119 01:56:49.087762 1465383 cache.go:65] Caching tarball of preloaded images
	I1119 01:56:49.087932 1465383 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1119 01:56:49.091132 1465383 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1119 01:56:49.091152 1465383 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1119 01:56:49.182012 1465383 preload.go:295] Got checksum from GCS API "e092595ade89dbfc477bd4cd6b9c633b"
	I1119 01:56:49.182143 1465383 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e092595ade89dbfc477bd4cd6b9c633b -> /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1119 01:56:54.076513 1465383 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a as a tarball
	
	
	* The control-plane node download-only-051528 host does not exist
	  To start a cluster, run: "minikube start -p download-only-051528"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)
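
Note: the download-only run above fetches the v1.28.0 preload tarball and records the MD5 checksum returned by the GCS API ("e092595ade89dbfc477bd4cd6b9c633b") together with the cache path it downloads to. As a hedged aside, the following minimal Go sketch recomputes that digest over the cached file so a stale or truncated download could be spotted by hand; the path and expected checksum are taken verbatim from the log above and are not part of the test suite.

// checksum_check.go - standalone sketch, not part of the minikube test suite.
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

func main() {
	// Cache path and expected MD5 exactly as reported in the log above.
	const path = "/home/jenkins/minikube-integration/21924-1463525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4"
	const want = "e092595ade89dbfc477bd4cd6b9c633b"

	f, err := os.Open(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	got := hex.EncodeToString(h.Sum(nil))
	fmt.Printf("got %s, want %s, match=%v\n", got, want, got == want)
}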

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-051528
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/json-events (10.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-126461 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-126461 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (10.146672353s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (10.15s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1119 01:57:38.048120 1465377 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1119 01:57:38.048161 1465377 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-126461
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-126461: exit status 85 (87.373539ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-051528 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-051528 │ jenkins │ v1.37.0 │ 19 Nov 25 01:56 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 19 Nov 25 01:57 UTC │ 19 Nov 25 01:57 UTC │
	│ delete  │ -p download-only-051528                                                                                                                                                   │ download-only-051528 │ jenkins │ v1.37.0 │ 19 Nov 25 01:57 UTC │ 19 Nov 25 01:57 UTC │
	│ start   │ -o=json --download-only -p download-only-126461 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-126461 │ jenkins │ v1.37.0 │ 19 Nov 25 01:57 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 01:57:27
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 01:57:27.944612 1465580 out.go:360] Setting OutFile to fd 1 ...
	I1119 01:57:27.944720 1465580 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 01:57:27.944730 1465580 out.go:374] Setting ErrFile to fd 2...
	I1119 01:57:27.944736 1465580 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 01:57:27.944989 1465580 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-1463525/.minikube/bin
	I1119 01:57:27.945383 1465580 out.go:368] Setting JSON to true
	I1119 01:57:27.946226 1465580 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":34775,"bootTime":1763482673,"procs":143,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1119 01:57:27.946295 1465580 start.go:143] virtualization:  
	I1119 01:57:27.949570 1465580 out.go:99] [download-only-126461] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1119 01:57:27.949743 1465580 notify.go:221] Checking for updates...
	I1119 01:57:27.952634 1465580 out.go:171] MINIKUBE_LOCATION=21924
	I1119 01:57:27.955598 1465580 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 01:57:27.958381 1465580 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21924-1463525/kubeconfig
	I1119 01:57:27.961240 1465580 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-1463525/.minikube
	I1119 01:57:27.964180 1465580 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1119 01:57:27.969862 1465580 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1119 01:57:27.970141 1465580 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 01:57:28.000626 1465580 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1119 01:57:28.000740 1465580 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 01:57:28.063066 1465580 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2025-11-19 01:57:28.052875784 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 01:57:28.063176 1465580 docker.go:319] overlay module found
	I1119 01:57:28.066253 1465580 out.go:99] Using the docker driver based on user configuration
	I1119 01:57:28.066297 1465580 start.go:309] selected driver: docker
	I1119 01:57:28.066305 1465580 start.go:930] validating driver "docker" against <nil>
	I1119 01:57:28.066433 1465580 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 01:57:28.121595 1465580 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2025-11-19 01:57:28.112464587 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 01:57:28.121751 1465580 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1119 01:57:28.122037 1465580 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1119 01:57:28.122191 1465580 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1119 01:57:28.125215 1465580 out.go:171] Using Docker driver with root privileges
	I1119 01:57:28.128009 1465580 cni.go:84] Creating CNI manager for ""
	I1119 01:57:28.128074 1465580 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 01:57:28.128087 1465580 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1119 01:57:28.128194 1465580 start.go:353] cluster config:
	{Name:download-only-126461 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-126461 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 01:57:28.131127 1465580 out.go:99] Starting "download-only-126461" primary control-plane node in "download-only-126461" cluster
	I1119 01:57:28.131145 1465580 cache.go:134] Beginning downloading kic base image for docker with crio
	I1119 01:57:28.133891 1465580 out.go:99] Pulling base image v0.0.48-1763507788-21924 ...
	I1119 01:57:28.133934 1465580 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 01:57:28.134095 1465580 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1119 01:57:28.149809 1465580 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a to local cache
	I1119 01:57:28.149930 1465580 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local cache directory
	I1119 01:57:28.149955 1465580 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local cache directory, skipping pull
	I1119 01:57:28.149961 1465580 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in cache, skipping pull
	I1119 01:57:28.149968 1465580 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a as a tarball
	I1119 01:57:28.200660 1465580 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1119 01:57:28.200684 1465580 cache.go:65] Caching tarball of preloaded images
	I1119 01:57:28.200848 1465580 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 01:57:28.203956 1465580 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1119 01:57:28.203988 1465580 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1119 01:57:28.289601 1465580 preload.go:295] Got checksum from GCS API "bc3e4aa50814345ef9ba3452bb5efb9f"
	I1119 01:57:28.289671 1465580 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4?checksum=md5:bc3e4aa50814345ef9ba3452bb5efb9f -> /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1119 01:57:36.868000 1465580 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 01:57:36.868459 1465580 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/download-only-126461/config.json ...
	I1119 01:57:36.868496 1465580 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/download-only-126461/config.json: {Name:mk0d1fa81e945e451094614c6a8bc947f1968be3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 01:57:36.868688 1465580 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 01:57:36.868854 1465580 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/linux/arm64/v1.34.1/kubectl
	
	
	* The control-plane node download-only-126461 host does not exist
	  To start a cluster, run: "minikube start -p download-only-126461"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.09s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.21s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-126461
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestBinaryMirror (0.59s)

                                                
                                                
=== RUN   TestBinaryMirror
I1119 01:57:39.179103 1465377 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-689753 --alsologtostderr --binary-mirror http://127.0.0.1:36283 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-689753" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-689753
--- PASS: TestBinaryMirror (0.59s)
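
Note: TestBinaryMirror points minikube at a local mirror via --binary-mirror http://127.0.0.1:36283, replacing the dl.k8s.io download recorded just above. The sketch below is only an illustration of what could sit behind that URL: a plain static file server. The assumption that the mirror must expose the same path layout as the upstream URL in the log (release/v1.34.1/bin/linux/arm64/kubectl) is not verified here, and the ./mirror directory name is hypothetical.

// mirror_server.go - illustrative sketch of a static file server backing a
// --binary-mirror URL; the expected directory layout is an assumption based on
// the dl.k8s.io URL logged above, not confirmed against minikube's download code.
package main

import (
	"log"
	"net/http"
)

func main() {
	// e.g. ./mirror/release/v1.34.1/bin/linux/arm64/kubectl (hypothetical layout)
	fs := http.FileServer(http.Dir("./mirror"))
	log.Println("serving ./mirror on 127.0.0.1:36283")
	log.Fatal(http.ListenAndServe("127.0.0.1:36283", fs))
}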

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-238225
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-238225: exit status 85 (71.634072ms)

                                                
                                                
-- stdout --
	* Profile "addons-238225" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-238225"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-238225
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-238225: exit status 85 (75.122564ms)

                                                
                                                
-- stdout --
	* Profile "addons-238225" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-238225"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
x
+
TestAddons/Setup (168.1s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-238225 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-238225 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m48.095209879s)
--- PASS: TestAddons/Setup (168.10s)
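
Note: the setup run enables a long list of addons in a single start invocation. As a hedged follow-up (not a command exercised anywhere in this report), the sketch below shells out to the same binary the tests drive and prints the addon status table for the addons-238225 profile, which is one way to confirm what actually ended up enabled.

// addons_list.go - sketch that invokes the same minikube binary used by the tests.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "-p", "addons-238225", "addons", "list")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Fprintln(os.Stderr, "addons list failed:", err)
		os.Exit(1)
	}
}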

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.22s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-238225 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-238225 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.22s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (9.8s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-238225 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-238225 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [99db9ee7-11e8-4a19-b431-99c0b121ef76] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [99db9ee7-11e8-4a19-b431-99c0b121ef76] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.00465313s
addons_test.go:694: (dbg) Run:  kubectl --context addons-238225 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-238225 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-238225 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-238225 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.80s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (12.71s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-238225
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-238225: (12.244589756s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-238225
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-238225
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-238225
--- PASS: TestAddons/StoppedEnableDisable (12.71s)

                                                
                                    
x
+
TestCertOptions (35.56s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-702842 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-702842 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (32.698270861s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-702842 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-702842 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-702842 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-702842" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-702842
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-702842: (2.132385396s)
--- PASS: TestCertOptions (35.56s)
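
Note: TestCertOptions starts the cluster with extra --apiserver-ips (192.168.15.15) and --apiserver-names (www.google.com) and then inspects /var/lib/minikube/certs/apiserver.crt with openssl over ssh. The following minimal Go sketch performs the same SAN check offline, assuming the certificate has first been copied out of the node to a local apiserver.crt file (for example by redirecting the ssh "sudo cat" command used in the test); the local filename is illustrative.

// san_check.go - sketch: decode a PEM certificate and print its SANs.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("apiserver.crt") // copied out of the node beforehand (assumed)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("DNS names:", cert.DNSNames)      // should include www.google.com per the flags above
	fmt.Println("IP addresses:", cert.IPAddresses) // should include 192.168.15.15 per the flags above
}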

                                                
                                    
x
+
TestCertExpiration (246.38s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-422184 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
E1119 02:55:45.747660 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/functional-132054/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-422184 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (43.751381112s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-422184 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-422184 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (20.113281578s)
helpers_test.go:175: Cleaning up "cert-expiration-422184" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-422184
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-422184: (2.515779568s)
--- PASS: TestCertExpiration (246.38s)

                                                
                                    
x
+
TestForceSystemdFlag (37.06s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-919197 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-919197 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (34.065905594s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-919197 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-919197" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-919197
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-919197: (2.693425844s)
--- PASS: TestForceSystemdFlag (37.06s)

                                                
                                    
x
+
TestForceSystemdEnv (41.93s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-335811 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-335811 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (38.900910811s)
helpers_test.go:175: Cleaning up "force-systemd-env-335811" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-335811
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-335811: (3.024171215s)
--- PASS: TestForceSystemdEnv (41.93s)

                                                
                                    
x
+
TestErrorSpam/setup (33.12s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-515671 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-515671 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-515671 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-515671 --driver=docker  --container-runtime=crio: (33.115670068s)
--- PASS: TestErrorSpam/setup (33.12s)

                                                
                                    
x
+
TestErrorSpam/start (0.79s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-515671 --log_dir /tmp/nospam-515671 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-515671 --log_dir /tmp/nospam-515671 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-515671 --log_dir /tmp/nospam-515671 start --dry-run
--- PASS: TestErrorSpam/start (0.79s)

                                                
                                    
x
+
TestErrorSpam/status (1.19s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-515671 --log_dir /tmp/nospam-515671 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-515671 --log_dir /tmp/nospam-515671 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-515671 --log_dir /tmp/nospam-515671 status
--- PASS: TestErrorSpam/status (1.19s)

                                                
                                    
x
+
TestErrorSpam/pause (6.01s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-515671 --log_dir /tmp/nospam-515671 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-515671 --log_dir /tmp/nospam-515671 pause: exit status 80 (1.787139309s)

                                                
                                                
-- stdout --
	* Pausing node nospam-515671 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:04:42Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_2.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-515671 --log_dir /tmp/nospam-515671 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-515671 --log_dir /tmp/nospam-515671 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-515671 --log_dir /tmp/nospam-515671 pause: exit status 80 (1.786833178s)

                                                
                                                
-- stdout --
	* Pausing node nospam-515671 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:04:44Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_2.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-515671 --log_dir /tmp/nospam-515671 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-515671 --log_dir /tmp/nospam-515671 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-515671 --log_dir /tmp/nospam-515671 pause: exit status 80 (2.437539205s)

                                                
                                                
-- stdout --
	* Pausing node nospam-515671 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:04:47Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_2.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-515671 --log_dir /tmp/nospam-515671 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.01s)
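The GUEST_PAUSE failures above all trace back to the same probe: the pause path lists containers with `sudo runc list -f json` inside the node, and on this node /run/runc does not exist. Below is a minimal sketch (not part of the test suite) that re-runs that probe over `minikube ssh`, using only the binary path, profile name, and command quoted in the log:

    // repro_pause_probe.go - re-run the container-listing probe behind the
    // GUEST_PAUSE error above; illustrative only.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // "sudo runc list -f json" is the command quoted in the error message;
        // running it over `minikube ssh` shows whether /run/runc exists on the node.
        cmd := exec.Command("out/minikube-linux-arm64", "ssh", "-p", "nospam-515671",
            "--", "sudo", "runc", "list", "-f", "json")
        out, err := cmd.CombinedOutput()
        fmt.Printf("output: %s\n", out)
        if err != nil {
            // On this run the probe failed with "open /run/runc: no such file or directory".
            fmt.Printf("probe failed: %v\n", err)
        }
    }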

                                                
                                    
TestErrorSpam/unpause (5.56s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-515671 --log_dir /tmp/nospam-515671 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-515671 --log_dir /tmp/nospam-515671 unpause: exit status 80 (1.808931564s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-515671 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:04:48Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_2.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-515671 --log_dir /tmp/nospam-515671 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-515671 --log_dir /tmp/nospam-515671 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-515671 --log_dir /tmp/nospam-515671 unpause: exit status 80 (1.747744351s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-515671 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:04:50Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_2.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-515671 --log_dir /tmp/nospam-515671 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-515671 --log_dir /tmp/nospam-515671 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-515671 --log_dir /tmp/nospam-515671 unpause: exit status 80 (2.004292664s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-515671 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T02:04:52Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_2.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-515671 --log_dir /tmp/nospam-515671 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.56s)

                                                
                                    
TestErrorSpam/stop (1.5s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-515671 --log_dir /tmp/nospam-515671 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-515671 --log_dir /tmp/nospam-515671 stop: (1.302289337s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-515671 --log_dir /tmp/nospam-515671 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-515671 --log_dir /tmp/nospam-515671 stop
--- PASS: TestErrorSpam/stop (1.50s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21924-1463525/.minikube/files/etc/test/nested/copy/1465377/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (81.34s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-132054 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1119 02:05:29.017637 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:05:29.023948 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:05:29.035259 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:05:29.056574 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:05:29.097893 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:05:29.179227 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:05:29.340654 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:05:29.661958 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:05:30.303929 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:05:31.585294 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:05:34.148221 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:05:39.270150 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:05:49.511613 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:06:09.993000 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-132054 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m21.335234804s)
--- PASS: TestFunctional/serial/StartWithProxy (81.34s)
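The repeated E-lines above come from client-go's transport cache trying to reload a client certificate for the addons-238225 profile, whose client.crt no longer exists on disk (presumably removed by the earlier addons teardown). A minimal probe of that path, using the exact file name from the log:

    // stale_cert_probe.go - confirm the missing file behind the
    // "Loading client cert failed" lines above; illustrative only.
    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        p := "/home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/client.crt"
        if _, err := os.Stat(p); err != nil {
            fmt.Println(err) // "no such file or directory", matching the E-lines above
        }
    }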

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (27.92s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1119 02:06:20.714511 1465377 config.go:182] Loaded profile config "functional-132054": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-132054 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-132054 --alsologtostderr -v=8: (27.914521413s)
functional_test.go:678: soft start took 27.914996771s for "functional-132054" cluster.
I1119 02:06:48.629339 1465377 config.go:182] Loaded profile config "functional-132054": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (27.92s)

                                                
                                    
TestFunctional/serial/KubeContext (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-132054 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.53s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-132054 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-132054 cache add registry.k8s.io/pause:3.1: (1.203286264s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-132054 cache add registry.k8s.io/pause:3.3
E1119 02:06:50.954330 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-132054 cache add registry.k8s.io/pause:3.3: (1.184713232s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-132054 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-132054 cache add registry.k8s.io/pause:latest: (1.145099727s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.53s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.14s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-132054 /tmp/TestFunctionalserialCacheCmdcacheadd_local878424832/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-132054 cache add minikube-local-cache-test:functional-132054
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-132054 cache delete minikube-local-cache-test:functional-132054
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-132054
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.14s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-132054 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.79s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-132054 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-132054 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-132054 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (285.702244ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-132054 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-132054 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.79s)
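The cache_reload flow above is: remove the cached image from the node with `crictl rmi`, confirm `crictl inspecti` now fails, run `cache reload`, and confirm the image is back. A minimal sketch of the same four steps driven with os/exec (binary path, profile, and image tag taken from the log; this is illustrative, not the test code):

    // cache_reload_sketch.go - remove / verify-missing / reload / verify-present.
    package main

    import (
        "log"
        "os/exec"
    )

    func run(args ...string) error {
        cmd := exec.Command("out/minikube-linux-arm64", args...)
        cmd.Stdout, cmd.Stderr = log.Writer(), log.Writer()
        return cmd.Run()
    }

    func main() {
        p := "functional-132054"
        img := "registry.k8s.io/pause:latest"

        // Remove the image from the node's container runtime.
        if err := run("-p", p, "ssh", "sudo crictl rmi "+img); err != nil {
            log.Fatalf("rmi failed: %v", err)
        }
        // inspecti should now fail (the log above shows exit status 1 here).
        if err := run("-p", p, "ssh", "sudo crictl inspecti "+img); err == nil {
            log.Fatal("expected inspecti to fail after rmi")
        }
        // Reload the images in minikube's local cache back into the node.
        if err := run("-p", p, "cache", "reload"); err != nil {
            log.Fatalf("cache reload failed: %v", err)
        }
        // The image should be present again.
        if err := run("-p", p, "ssh", "sudo crictl inspecti "+img); err != nil {
            log.Fatalf("image still missing after reload: %v", err)
        }
    }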

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.15s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-132054 kubectl -- --context functional-132054 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.15s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-132054 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

                                                
                                    
TestFunctional/serial/ExtraConfig (37.74s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-132054 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-132054 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (37.741353018s)
functional_test.go:776: restart took 37.741444806s for "functional-132054" cluster.
I1119 02:07:33.809999 1465377 config.go:182] Loaded profile config "functional-132054": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (37.74s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-132054 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.44s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-132054 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-132054 logs: (1.438937685s)
--- PASS: TestFunctional/serial/LogsCmd (1.44s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.46s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-132054 logs --file /tmp/TestFunctionalserialLogsFileCmd1515259564/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-132054 logs --file /tmp/TestFunctionalserialLogsFileCmd1515259564/001/logs.txt: (1.460566598s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.46s)

                                                
                                    
TestFunctional/serial/InvalidService (3.96s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-132054 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-132054
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-132054: exit status 115 (388.183711ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:32688 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-132054 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.96s)
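InvalidService applies a service whose backing pods never run, and `minikube service` then exits with code 115 (SVC_UNREACHABLE), as captured above. A minimal sketch that reproduces the check by asserting on that exit code (context, manifest path, and code all taken from the log):

    // invalid_service_check.go - expect SVC_UNREACHABLE (exit 115); illustrative only.
    package main

    import (
        "errors"
        "log"
        "os/exec"
    )

    func main() {
        if err := exec.Command("kubectl", "--context", "functional-132054",
            "apply", "-f", "testdata/invalidsvc.yaml").Run(); err != nil {
            log.Fatalf("apply failed: %v", err)
        }
        err := exec.Command("out/minikube-linux-arm64", "service", "invalid-svc",
            "-p", "functional-132054").Run()
        var ee *exec.ExitError
        if errors.As(err, &ee) && ee.ExitCode() == 115 {
            log.Println("got expected SVC_UNREACHABLE exit code 115")
        } else {
            log.Fatalf("unexpected result: %v", err)
        }
        _ = exec.Command("kubectl", "--context", "functional-132054",
            "delete", "-f", "testdata/invalidsvc.yaml").Run()
    }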

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-132054 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-132054 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-132054 config get cpus: exit status 14 (83.840807ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-132054 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-132054 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-132054 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-132054 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-132054 config get cpus: exit status 14 (69.817217ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.48s)
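The config cycle above shows that `config get` on an unset key exits 14 with "specified key could not be found in config", while a set key is printed normally. A minimal sketch of the same unset/get/set round trip (exit code and messages from the log; the helper is illustrative):

    // config_roundtrip_sketch.go - unset -> get (expect 14) -> set -> get -> unset.
    package main

    import (
        "errors"
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    func mk(args ...string) (string, int, error) {
        out, err := exec.Command("out/minikube-linux-arm64",
            append([]string{"-p", "functional-132054"}, args...)...).CombinedOutput()
        var ee *exec.ExitError
        if errors.As(err, &ee) {
            return string(out), ee.ExitCode(), nil
        }
        return string(out), 0, err
    }

    func main() {
        mk("config", "unset", "cpus")
        if _, code, _ := mk("config", "get", "cpus"); code != 14 {
            log.Fatalf("expected exit 14 for a missing key, got %d", code)
        }
        mk("config", "set", "cpus", "2")
        out, _, _ := mk("config", "get", "cpus")
        fmt.Println("cpus =", strings.TrimSpace(out)) // expect "2"
        mk("config", "unset", "cpus")
    }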

                                                
                                    
TestFunctional/parallel/DashboardCmd (10.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-132054 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-132054 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 1491931: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.28s)

                                                
                                    
TestFunctional/parallel/DryRun (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-132054 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-132054 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (215.66126ms)

                                                
                                                
-- stdout --
	* [functional-132054] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21924
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21924-1463525/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-1463525/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1119 02:18:10.431044 1491423 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:18:10.431181 1491423 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:18:10.431192 1491423 out.go:374] Setting ErrFile to fd 2...
	I1119 02:18:10.431263 1491423 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:18:10.431560 1491423 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-1463525/.minikube/bin
	I1119 02:18:10.431978 1491423 out.go:368] Setting JSON to false
	I1119 02:18:10.432985 1491423 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":36018,"bootTime":1763482673,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1119 02:18:10.433053 1491423 start.go:143] virtualization:  
	I1119 02:18:10.442195 1491423 out.go:179] * [functional-132054] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1119 02:18:10.445319 1491423 out.go:179]   - MINIKUBE_LOCATION=21924
	I1119 02:18:10.445388 1491423 notify.go:221] Checking for updates...
	I1119 02:18:10.450914 1491423 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 02:18:10.453786 1491423 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21924-1463525/kubeconfig
	I1119 02:18:10.457486 1491423 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-1463525/.minikube
	I1119 02:18:10.461219 1491423 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1119 02:18:10.465162 1491423 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 02:18:10.468544 1491423 config.go:182] Loaded profile config "functional-132054": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:18:10.469168 1491423 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 02:18:10.493266 1491423 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1119 02:18:10.493373 1491423 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:18:10.557830 1491423 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-19 02:18:10.546162841 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 02:18:10.557935 1491423 docker.go:319] overlay module found
	I1119 02:18:10.561052 1491423 out.go:179] * Using the docker driver based on existing profile
	I1119 02:18:10.563963 1491423 start.go:309] selected driver: docker
	I1119 02:18:10.563988 1491423 start.go:930] validating driver "docker" against &{Name:functional-132054 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-132054 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:18:10.564099 1491423 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 02:18:10.567720 1491423 out.go:203] 
	W1119 02:18:10.570726 1491423 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1119 02:18:10.573884 1491423 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-132054 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.53s)
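The first dry run above deliberately requests 250MB and is rejected with RSRC_INSUFFICIENT_REQ_MEMORY (exit 23), while the second, which inherits the profile's 4096MB, succeeds. A minimal sketch of the validation it trips over; the constant and helper below are illustrative, not minikube's actual code, with the 1800MB floor taken from the message in the log:

    // memcheck_sketch.go - requested memory vs. the usable minimum quoted above.
    package main

    import "fmt"

    const minUsableMB = 1800 // from "less than the usable minimum of 1800MB" above

    func checkRequestedMemory(requestedMB int) error {
        if requestedMB < minUsableMB {
            return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
                requestedMB, minUsableMB)
        }
        return nil
    }

    func main() {
        fmt.Println(checkRequestedMemory(250))  // fails, as in the dry run above
        fmt.Println(checkRequestedMemory(4096)) // passes, matching the profile's 4096MB
    }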

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-132054 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-132054 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (262.557878ms)

                                                
                                                
-- stdout --
	* [functional-132054] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21924
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21924-1463525/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-1463525/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1119 02:18:10.198182 1491323 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:18:10.198410 1491323 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:18:10.198441 1491323 out.go:374] Setting ErrFile to fd 2...
	I1119 02:18:10.198460 1491323 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:18:10.198868 1491323 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-1463525/.minikube/bin
	I1119 02:18:10.199276 1491323 out.go:368] Setting JSON to false
	I1119 02:18:10.200250 1491323 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":36018,"bootTime":1763482673,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1119 02:18:10.200352 1491323 start.go:143] virtualization:  
	I1119 02:18:10.204185 1491323 out.go:179] * [functional-132054] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1119 02:18:10.208091 1491323 out.go:179]   - MINIKUBE_LOCATION=21924
	I1119 02:18:10.208176 1491323 notify.go:221] Checking for updates...
	I1119 02:18:10.214366 1491323 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 02:18:10.217113 1491323 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21924-1463525/kubeconfig
	I1119 02:18:10.220041 1491323 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-1463525/.minikube
	I1119 02:18:10.222866 1491323 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1119 02:18:10.225730 1491323 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 02:18:10.228974 1491323 config.go:182] Loaded profile config "functional-132054": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:18:10.229619 1491323 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 02:18:10.272254 1491323 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1119 02:18:10.272359 1491323 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:18:10.339201 1491323 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-19 02:18:10.330083452 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 02:18:10.339307 1491323 docker.go:319] overlay module found
	I1119 02:18:10.342524 1491323 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1119 02:18:10.345709 1491323 start.go:309] selected driver: docker
	I1119 02:18:10.345735 1491323 start.go:930] validating driver "docker" against &{Name:functional-132054 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-132054 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 02:18:10.345838 1491323 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 02:18:10.349333 1491323 out.go:203] 
	W1119 02:18:10.352147 1491323 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1119 02:18:10.355040 1491323 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.26s)
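The French output above is the same dry-run failure rendered in a localized catalog. A minimal sketch of forcing that behaviour, under the assumption that minikube picks its message language from the usual locale environment variables (LC_ALL / LANG); the command line is the one from the log:

    // i18n_dryrun_sketch.go - rerun the dry run with a French locale; illustrative only.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-linux-arm64", "start", "-p", "functional-132054",
            "--dry-run", "--memory", "250MB", "--alsologtostderr",
            "--driver=docker", "--container-runtime=crio")
        cmd.Env = append(os.Environ(), "LC_ALL=fr_FR.UTF-8") // assumption: language comes from the environment
        out, _ := cmd.CombinedOutput()                        // exits 23, as in the English run
        fmt.Printf("%s", out)
    }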

                                                
                                    
TestFunctional/parallel/StatusCmd (1.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-132054 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-132054 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-132054 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.20s)
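The -f argument above is a Go text/template rendered over the status fields. A minimal sketch of that rendering; the Status struct here is illustrative, containing just the four fields the format string references (including the "kublet" spelling, kept verbatim from the test):

    // status_template_sketch.go - render the status format string from the log.
    package main

    import (
        "os"
        "text/template"
    )

    type Status struct {
        Host, Kubelet, APIServer, Kubeconfig string
    }

    func main() {
        const format = "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"
        t := template.Must(template.New("status").Parse(format))
        _ = t.Execute(os.Stdout, Status{
            Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured",
        })
    }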

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-132054 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-132054 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.20s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (26.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [ae0bd49e-56ba-4120-8e99-fc5f7304b945] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.002994897s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-132054 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-132054 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-132054 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-132054 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [3ba464f9-a39f-46eb-8c4c-40399a20eba8] Pending
helpers_test.go:352: "sp-pod" [3ba464f9-a39f-46eb-8c4c-40399a20eba8] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [3ba464f9-a39f-46eb-8c4c-40399a20eba8] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.003288601s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-132054 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-132054 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-132054 delete -f testdata/storage-provisioner/pod.yaml: (1.054480427s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-132054 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [8ed06abb-3772-491f-b558-c2f23448098d] Pending
helpers_test.go:352: "sp-pod" [8ed06abb-3772-491f-b558-c2f23448098d] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.002872305s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-132054 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.03s)
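The persistence check above works by writing /tmp/mount/foo inside the pod, deleting and recreating the pod from the same manifest, and then listing the PVC-backed mount to confirm the file survived. A minimal sketch of that sequence (context, pod name, and manifest path from the log; readiness waiting is simplified to `kubectl wait`, which the test itself does differently):

    // pvc_persistence_sketch.go - touch, recreate pod, confirm the file persists.
    package main

    import (
        "log"
        "os/exec"
    )

    func kubectl(args ...string) {
        cmd := exec.Command("kubectl", append([]string{"--context", "functional-132054"}, args...)...)
        if out, err := cmd.CombinedOutput(); err != nil {
            log.Fatalf("kubectl %v failed: %v\n%s", args, err, out)
        }
    }

    func main() {
        kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
        kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
        kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
        kubectl("wait", "--for=condition=Ready", "pod/sp-pod", "--timeout=4m")
        kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount") // should list "foo"
    }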

                                                
                                    
TestFunctional/parallel/SSHCmd (0.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-132054 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-132054 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.74s)

                                                
                                    
TestFunctional/parallel/CpCmd (2.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-132054 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-132054 ssh -n functional-132054 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-132054 cp functional-132054:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2172885285/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-132054 ssh -n functional-132054 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-132054 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-132054 ssh -n functional-132054 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.34s)
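Note: a rough manual equivalent of the copy round-trip exercised here (the destination path on the host is illustrative, not taken from this run):

  out/minikube-linux-arm64 -p functional-132054 cp testdata/cp-test.txt /home/docker/cp-test.txt
  out/minikube-linux-arm64 -p functional-132054 ssh -n functional-132054 "sudo cat /home/docker/cp-test.txt"
  # copy back out of the node to an arbitrary local path
  out/minikube-linux-arm64 -p functional-132054 cp functional-132054:/home/docker/cp-test.txt /tmp/cp-test-roundtrip.txt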

                                                
                                    
TestFunctional/parallel/FileSync (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/1465377/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-132054 ssh "sudo cat /etc/test/nested/copy/1465377/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.37s)

                                                
                                    
TestFunctional/parallel/CertSync (2.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/1465377.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-132054 ssh "sudo cat /etc/ssl/certs/1465377.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/1465377.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-132054 ssh "sudo cat /usr/share/ca-certificates/1465377.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-132054 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/14653772.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-132054 ssh "sudo cat /etc/ssl/certs/14653772.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/14653772.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-132054 ssh "sudo cat /usr/share/ca-certificates/14653772.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-132054 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.18s)
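Note: these checks confirm that a certificate staged on the test host (presumably under its MINIKUBE_HOME) shows up inside the guest under both its original name and a hashed name. A minimal spot check, using the filenames from this run:

  out/minikube-linux-arm64 -p functional-132054 ssh "sudo cat /etc/ssl/certs/1465377.pem"
  out/minikube-linux-arm64 -p functional-132054 ssh "sudo cat /usr/share/ca-certificates/1465377.pem"
  # 51391683.0 is the hashed filename the test expects for the same certificate
  out/minikube-linux-arm64 -p functional-132054 ssh "sudo cat /etc/ssl/certs/51391683.0"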

                                                
                                    
TestFunctional/parallel/NodeLabels (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-132054 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.11s)
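Note: the label check boils down to a single kubectl query; a sketch using the same go-template as the test:

  kubectl --context functional-132054 get nodes --output=go-template \
    --template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
  # expect minikube.k8s.io/* labels alongside the standard kubernetes.io ones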

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-132054 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-132054 ssh "sudo systemctl is-active docker": exit status 1 (350.371056ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-132054 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-132054 ssh "sudo systemctl is-active containerd": exit status 1 (343.357926ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.69s)
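Note: this guard asserts that the runtimes not selected for the profile are inactive inside the guest. A sketch, assuming the cluster was started with the crio runtime as in this job:

  out/minikube-linux-arm64 -p functional-132054 ssh "sudo systemctl is-active crio"        # expected: active
  out/minikube-linux-arm64 -p functional-132054 ssh "sudo systemctl is-active docker"      # expected: inactive, non-zero exit
  out/minikube-linux-arm64 -p functional-132054 ssh "sudo systemctl is-active containerd"  # expected: inactive, non-zero exit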

                                                
                                    
TestFunctional/parallel/License (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.55s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-132054 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-132054 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-132054 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 1487742: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-132054 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.69s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-132054 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-132054 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [fa13020b-3cb8-450c-ab84-3fca5b06b4c1] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [fa13020b-3cb8-450c-ab84-3fca5b06b4c1] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.00398792s
I1119 02:07:52.104302 1465377 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.43s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-132054 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.106.54.194 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-132054 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
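Note: taken together, the tunnel sub-tests cover the usual workflow: start a tunnel, expose a LoadBalancer service, read its ingress IP, hit it directly, then tear the tunnel down. A manual sketch (the IP is whatever the jsonpath query prints, not a fixed value):

  out/minikube-linux-arm64 -p functional-132054 tunnel &   # keep running; routes LoadBalancer traffic to the cluster
  kubectl --context functional-132054 apply -f testdata/testsvc.yaml
  kubectl --context functional-132054 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
  curl http://<ingress-ip>/    # substitute the IP printed above
  kill %1                      # stop the background tunnel when done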

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "361.296067ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "56.54454ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "361.747468ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "50.479467ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)
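Note: the three profile sub-tests differ only in output format; the --light variant is markedly faster here, presumably because it skips the per-profile status probe. Equivalent commands:

  out/minikube-linux-arm64 profile list
  out/minikube-linux-arm64 profile list -o json
  out/minikube-linux-arm64 profile list -o json --light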

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-132054 /tmp/TestFunctionalparallelMountCmdany-port4046050930/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1763518677192000274" to /tmp/TestFunctionalparallelMountCmdany-port4046050930/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1763518677192000274" to /tmp/TestFunctionalparallelMountCmdany-port4046050930/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1763518677192000274" to /tmp/TestFunctionalparallelMountCmdany-port4046050930/001/test-1763518677192000274
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-132054 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-132054 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (341.4807ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1119 02:17:57.533766 1465377 retry.go:31] will retry after 548.919346ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-132054 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-132054 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov 19 02:17 created-by-test
-rw-r--r-- 1 docker docker 24 Nov 19 02:17 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov 19 02:17 test-1763518677192000274
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-132054 ssh cat /mount-9p/test-1763518677192000274
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-132054 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [771e75e3-e3cc-4c97-9d1e-2f2f0a523384] Pending
helpers_test.go:352: "busybox-mount" [771e75e3-e3cc-4c97-9d1e-2f2f0a523384] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [771e75e3-e3cc-4c97-9d1e-2f2f0a523384] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [771e75e3-e3cc-4c97-9d1e-2f2f0a523384] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003441714s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-132054 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-132054 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-132054 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-132054 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-132054 /tmp/TestFunctionalparallelMountCmdany-port4046050930/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.95s)
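Note: a minimal manual version of the 9p mount check, with the host directory given as a placeholder path rather than the per-test temp dir used above:

  out/minikube-linux-arm64 mount -p functional-132054 /tmp/host-dir:/mount-9p &
  out/minikube-linux-arm64 -p functional-132054 ssh "findmnt -T /mount-9p | grep 9p"
  out/minikube-linux-arm64 -p functional-132054 ssh -- ls -la /mount-9p
  out/minikube-linux-arm64 mount -p functional-132054 --kill=true   # used below in VerifyCleanup to kill running mount processes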

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-132054 /tmp/TestFunctionalparallelMountCmdspecific-port107726217/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-132054 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-132054 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (357.626405ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1119 02:18:05.501232 1465377 retry.go:31] will retry after 631.638665ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-132054 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-132054 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-132054 /tmp/TestFunctionalparallelMountCmdspecific-port107726217/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-132054 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-132054 ssh "sudo umount -f /mount-9p": exit status 1 (278.744482ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-132054 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-132054 /tmp/TestFunctionalparallelMountCmdspecific-port107726217/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.02s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-132054 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1884852784/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-132054 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1884852784/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-132054 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1884852784/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-132054 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-132054 ssh "findmnt -T" /mount1: exit status 1 (569.830243ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1119 02:18:07.743609 1465377 retry.go:31] will retry after 268.983578ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-132054 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-132054 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-132054 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-132054 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-132054 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1884852784/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-132054 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1884852784/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-132054 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1884852784/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.73s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-132054 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.61s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-132054 service list -o json
functional_test.go:1504: Took "661.802746ms" to run "out/minikube-linux-arm64 -p functional-132054 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.66s)
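Note: the service listing commands exercised by these two sub-tests, for reference:

  out/minikube-linux-arm64 -p functional-132054 service list
  out/minikube-linux-arm64 -p functional-132054 service list -o json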

                                                
                                    
TestFunctional/parallel/Version/short (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-132054 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

                                                
                                    
TestFunctional/parallel/Version/components (1.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-132054 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-132054 version -o=json --components: (1.184405947s)
--- PASS: TestFunctional/parallel/Version/components (1.18s)
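Note: both version checks are plain CLI invocations; the --components form also queries the node for component versions, which is presumably why it takes about a second here:

  out/minikube-linux-arm64 -p functional-132054 version --short
  out/minikube-linux-arm64 -p functional-132054 version -o=json --components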

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-132054 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-132054 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-132054 image ls --format short --alsologtostderr:
I1119 02:18:25.069990 1493991 out.go:360] Setting OutFile to fd 1 ...
I1119 02:18:25.070271 1493991 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1119 02:18:25.070299 1493991 out.go:374] Setting ErrFile to fd 2...
I1119 02:18:25.070327 1493991 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1119 02:18:25.070746 1493991 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-1463525/.minikube/bin
I1119 02:18:25.071785 1493991 config.go:182] Loaded profile config "functional-132054": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1119 02:18:25.071987 1493991 config.go:182] Loaded profile config "functional-132054": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1119 02:18:25.072718 1493991 cli_runner.go:164] Run: docker container inspect functional-132054 --format={{.State.Status}}
I1119 02:18:25.091442 1493991 ssh_runner.go:195] Run: systemctl --version
I1119 02:18:25.091503 1493991 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-132054
I1119 02:18:25.134933 1493991 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34624 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/functional-132054/id_rsa Username:docker}
I1119 02:18:25.244520 1493991 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-132054 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-132054 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ 7eb2c6ff0c5a7 │ 72.6MB │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
│ docker.io/library/nginx                 │ alpine             │ cbad6347cca28 │ 54.8MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ a1894772a478e │ 206MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 1611cd07b61d5 │ 3.77MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ b5f57ec6b9867 │ 51.6MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 138784d87c9c5 │ 73.2MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ 43911e833d64d │ 84.8MB │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
│ docker.io/library/nginx                 │ latest             │ bb747ca923a5e │ 176MB  │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ 05baa95f5142d │ 75.9MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-132054 image ls --format table --alsologtostderr:
I1119 02:18:25.866484 1494223 out.go:360] Setting OutFile to fd 1 ...
I1119 02:18:25.866660 1494223 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1119 02:18:25.866670 1494223 out.go:374] Setting ErrFile to fd 2...
I1119 02:18:25.866675 1494223 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1119 02:18:25.866939 1494223 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-1463525/.minikube/bin
I1119 02:18:25.867599 1494223 config.go:182] Loaded profile config "functional-132054": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1119 02:18:25.867716 1494223 config.go:182] Loaded profile config "functional-132054": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1119 02:18:25.868196 1494223 cli_runner.go:164] Run: docker container inspect functional-132054 --format={{.State.Status}}
I1119 02:18:25.885003 1494223 ssh_runner.go:195] Run: systemctl --version
I1119 02:18:25.885058 1494223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-132054
I1119 02:18:25.902614 1494223 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34624 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/functional-132054/id_rsa Username:docker}
I1119 02:18:26.005706 1494223 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-132054 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-132054 image ls --format json --alsologtostderr:
[{"id":"b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500","registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"51592017"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b
9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"73195387"},{"id":"7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f","registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"72629077"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad
8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e","repoDigests":["registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"205987068"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"},{"id":"05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9","repoDigests":["registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6","registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce
8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"75938711"},{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1","repoDigests":["docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90","docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54837949"},{"id":"bb747ca923a5e1139baddd6f4743e0c0c74df58f4ad8ddbc10ab183b92f5a5c7","repoDigests":["docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42","do
cker.io/library/nginx@sha256:7de350c1fbb1f7b119a1d08f69fef5c92624cb01e03bc25c0ae11072b8969712"],"repoTags":["docker.io/library/nginx:latest"],"size":"175943180"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902","registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"84753391"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@s
ha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-132054 image ls --format json --alsologtostderr:
I1119 02:18:25.601054 1494161 out.go:360] Setting OutFile to fd 1 ...
I1119 02:18:25.605018 1494161 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1119 02:18:25.605029 1494161 out.go:374] Setting ErrFile to fd 2...
I1119 02:18:25.605034 1494161 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1119 02:18:25.605283 1494161 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-1463525/.minikube/bin
I1119 02:18:25.607400 1494161 config.go:182] Loaded profile config "functional-132054": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1119 02:18:25.607525 1494161 config.go:182] Loaded profile config "functional-132054": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1119 02:18:25.608059 1494161 cli_runner.go:164] Run: docker container inspect functional-132054 --format={{.State.Status}}
I1119 02:18:25.631075 1494161 ssh_runner.go:195] Run: systemctl --version
I1119 02:18:25.631126 1494161 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-132054
I1119 02:18:25.656852 1494161 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34624 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/functional-132054/id_rsa Username:docker}
I1119 02:18:25.776902 1494161 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-132054 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-132054 image ls --format yaml --alsologtostderr:
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "73195387"
- id: 05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9
repoDigests:
- registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "75938711"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e
repoDigests:
- registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "205987068"
- id: 43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
- registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "84753391"
- id: 7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "72629077"
- id: cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1
repoDigests:
- docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "54837949"
- id: bb747ca923a5e1139baddd6f4743e0c0c74df58f4ad8ddbc10ab183b92f5a5c7
repoDigests:
- docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42
- docker.io/library/nginx@sha256:7de350c1fbb1f7b119a1d08f69fef5c92624cb01e03bc25c0ae11072b8969712
repoTags:
- docker.io/library/nginx:latest
size: "175943180"
- id: b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
- registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "51592017"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-132054 image ls --format yaml --alsologtostderr:
I1119 02:18:25.337298 1494089 out.go:360] Setting OutFile to fd 1 ...
I1119 02:18:25.337469 1494089 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1119 02:18:25.337481 1494089 out.go:374] Setting ErrFile to fd 2...
I1119 02:18:25.337487 1494089 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1119 02:18:25.337798 1494089 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-1463525/.minikube/bin
I1119 02:18:25.338533 1494089 config.go:182] Loaded profile config "functional-132054": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1119 02:18:25.338692 1494089 config.go:182] Loaded profile config "functional-132054": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1119 02:18:25.339225 1494089 cli_runner.go:164] Run: docker container inspect functional-132054 --format={{.State.Status}}
I1119 02:18:25.369813 1494089 ssh_runner.go:195] Run: systemctl --version
I1119 02:18:25.369873 1494089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-132054
I1119 02:18:25.391810 1494089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34624 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/functional-132054/id_rsa Username:docker}
I1119 02:18:25.493302 1494089 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)
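Note: all four listing tests wrap the same underlying query (the stderr above shows sudo crictl images --output json being run on the node) and only vary the presentation:

  out/minikube-linux-arm64 -p functional-132054 image ls --format short
  out/minikube-linux-arm64 -p functional-132054 image ls --format table
  out/minikube-linux-arm64 -p functional-132054 image ls --format json
  out/minikube-linux-arm64 -p functional-132054 image ls --format yaml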

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-132054 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-132054 ssh pgrep buildkitd: exit status 1 (341.023272ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-132054 image build -t localhost/my-image:functional-132054 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-132054 image build -t localhost/my-image:functional-132054 testdata/build --alsologtostderr: (3.380461532s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-132054 image build -t localhost/my-image:functional-132054 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 0e1ad210d03
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-132054
--> 9e697dfda04
Successfully tagged localhost/my-image:functional-132054
9e697dfda0450c15fc112b88ee86d42d7ae88c5b92a6f47314d055f268e051ad
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-132054 image build -t localhost/my-image:functional-132054 testdata/build --alsologtostderr:
I1119 02:18:25.432110 1494108 out.go:360] Setting OutFile to fd 1 ...
I1119 02:18:25.433646 1494108 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1119 02:18:25.433670 1494108 out.go:374] Setting ErrFile to fd 2...
I1119 02:18:25.433705 1494108 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1119 02:18:25.434127 1494108 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-1463525/.minikube/bin
I1119 02:18:25.434848 1494108 config.go:182] Loaded profile config "functional-132054": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1119 02:18:25.435629 1494108 config.go:182] Loaded profile config "functional-132054": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1119 02:18:25.436166 1494108 cli_runner.go:164] Run: docker container inspect functional-132054 --format={{.State.Status}}
I1119 02:18:25.457039 1494108 ssh_runner.go:195] Run: systemctl --version
I1119 02:18:25.457094 1494108 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-132054
I1119 02:18:25.477270 1494108 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34624 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/functional-132054/id_rsa Username:docker}
I1119 02:18:25.592835 1494108 build_images.go:162] Building image from path: /tmp/build.2268363265.tar
I1119 02:18:25.592967 1494108 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1119 02:18:25.602121 1494108 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2268363265.tar
I1119 02:18:25.605925 1494108 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2268363265.tar: stat -c "%s %y" /var/lib/minikube/build/build.2268363265.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2268363265.tar': No such file or directory
I1119 02:18:25.605946 1494108 ssh_runner.go:362] scp /tmp/build.2268363265.tar --> /var/lib/minikube/build/build.2268363265.tar (3072 bytes)
I1119 02:18:25.626126 1494108 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2268363265
I1119 02:18:25.637454 1494108 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2268363265 -xf /var/lib/minikube/build/build.2268363265.tar
I1119 02:18:25.647097 1494108 crio.go:315] Building image: /var/lib/minikube/build/build.2268363265
I1119 02:18:25.647183 1494108 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-132054 /var/lib/minikube/build/build.2268363265 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1119 02:18:28.711923 1494108 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-132054 /var/lib/minikube/build/build.2268363265 --cgroup-manager=cgroupfs: (3.064717235s)
I1119 02:18:28.711999 1494108 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2268363265
I1119 02:18:28.720692 1494108 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2268363265.tar
I1119 02:18:28.728172 1494108 build_images.go:218] Built localhost/my-image:functional-132054 from /tmp/build.2268363265.tar
I1119 02:18:28.728199 1494108 build_images.go:134] succeeded building to: functional-132054
I1119 02:18:28.728204 1494108 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-132054 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.97s)
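Note: per the stderr above, on the crio runtime the build context is shipped to the node as a tarball and built there with podman. A minimal reproduction, assuming a directory containing the same three-step Dockerfile (FROM gcr.io/k8s-minikube/busybox; RUN true; ADD content.txt /):

  out/minikube-linux-arm64 -p functional-132054 image build -t localhost/my-image:functional-132054 testdata/build
  out/minikube-linux-arm64 -p functional-132054 image ls | grep my-image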

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-132054
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.68s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-132054 image rm kicbase/echo-server:functional-132054 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-132054 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.61s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-132054 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-132054 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-132054 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.24s)

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-132054
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-132054
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-132054
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (182.75s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1119 02:20:29.011729 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-449095 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (3m1.870713704s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (182.75s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (7.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-449095 kubectl -- rollout status deployment/busybox: (4.689874478s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 kubectl -- exec busybox-7b57f96db7-7clwh -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 kubectl -- exec busybox-7b57f96db7-g67vb -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 kubectl -- exec busybox-7b57f96db7-k6jlt -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 kubectl -- exec busybox-7b57f96db7-7clwh -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 kubectl -- exec busybox-7b57f96db7-g67vb -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 kubectl -- exec busybox-7b57f96db7-k6jlt -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 kubectl -- exec busybox-7b57f96db7-7clwh -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 kubectl -- exec busybox-7b57f96db7-g67vb -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 kubectl -- exec busybox-7b57f96db7-k6jlt -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.38s)
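
The DeployApp run rolls out a busybox deployment and then execs nslookup in every pod to confirm that cluster DNS resolves both external and in-cluster names on the HA cluster. A minimal sketch of that per-pod check, shelling out to kubectl the same way the harness does; the context name comes from the log, while the app=busybox label selector is an assumption for illustration:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// For each busybox pod, run an in-cluster DNS lookup, mirroring the checks logged above.
// The "app=busybox" label selector is assumed; adjust it to match the actual deployment.
func main() {
	out, err := exec.Command("kubectl", "--context", "ha-449095",
		"get", "pods", "-l", "app=busybox",
		"-o", "jsonpath={.items[*].metadata.name}").Output()
	if err != nil {
		panic(err)
	}
	for _, pod := range strings.Fields(string(out)) {
		res, err := exec.Command("kubectl", "--context", "ha-449095",
			"exec", pod, "--",
			"nslookup", "kubernetes.default.svc.cluster.local").CombinedOutput()
		fmt.Printf("%s:\n%s(err=%v)\n", pod, res, err)
	}
}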

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.57s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 kubectl -- exec busybox-7b57f96db7-7clwh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 kubectl -- exec busybox-7b57f96db7-7clwh -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 kubectl -- exec busybox-7b57f96db7-g67vb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 kubectl -- exec busybox-7b57f96db7-g67vb -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 kubectl -- exec busybox-7b57f96db7-k6jlt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 kubectl -- exec busybox-7b57f96db7-k6jlt -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.57s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (60.54s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 node add --alsologtostderr -v 5
E1119 02:21:52.081033 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:22:42.679853 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/functional-132054/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:22:42.686203 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/functional-132054/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:22:42.697620 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/functional-132054/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:22:42.719093 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/functional-132054/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:22:42.760523 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/functional-132054/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:22:42.841920 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/functional-132054/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-449095 node add --alsologtostderr -v 5: (59.48921997s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 status --alsologtostderr -v 5
E1119 02:22:43.004250 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/functional-132054/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:22:43.325718 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/functional-132054/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:22:43.967975 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/functional-132054/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-449095 status --alsologtostderr -v 5: (1.052414545s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (60.54s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-449095 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.16s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
E1119 02:22:45.249896 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/functional-132054/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.158551401s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.16s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (19.79s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-449095 status --output json --alsologtostderr -v 5: (1.017453509s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 cp testdata/cp-test.txt ha-449095:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 ssh -n ha-449095 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 cp ha-449095:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1820458948/001/cp-test_ha-449095.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 ssh -n ha-449095 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 cp ha-449095:/home/docker/cp-test.txt ha-449095-m02:/home/docker/cp-test_ha-449095_ha-449095-m02.txt
E1119 02:22:47.811861 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/functional-132054/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 ssh -n ha-449095 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 ssh -n ha-449095-m02 "sudo cat /home/docker/cp-test_ha-449095_ha-449095-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 cp ha-449095:/home/docker/cp-test.txt ha-449095-m03:/home/docker/cp-test_ha-449095_ha-449095-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 ssh -n ha-449095 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 ssh -n ha-449095-m03 "sudo cat /home/docker/cp-test_ha-449095_ha-449095-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 cp ha-449095:/home/docker/cp-test.txt ha-449095-m04:/home/docker/cp-test_ha-449095_ha-449095-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 ssh -n ha-449095 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 ssh -n ha-449095-m04 "sudo cat /home/docker/cp-test_ha-449095_ha-449095-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 cp testdata/cp-test.txt ha-449095-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 ssh -n ha-449095-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 cp ha-449095-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1820458948/001/cp-test_ha-449095-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 ssh -n ha-449095-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 cp ha-449095-m02:/home/docker/cp-test.txt ha-449095:/home/docker/cp-test_ha-449095-m02_ha-449095.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 ssh -n ha-449095-m02 "sudo cat /home/docker/cp-test.txt"
E1119 02:22:52.933491 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/functional-132054/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 ssh -n ha-449095 "sudo cat /home/docker/cp-test_ha-449095-m02_ha-449095.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 cp ha-449095-m02:/home/docker/cp-test.txt ha-449095-m03:/home/docker/cp-test_ha-449095-m02_ha-449095-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 ssh -n ha-449095-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 ssh -n ha-449095-m03 "sudo cat /home/docker/cp-test_ha-449095-m02_ha-449095-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 cp ha-449095-m02:/home/docker/cp-test.txt ha-449095-m04:/home/docker/cp-test_ha-449095-m02_ha-449095-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 ssh -n ha-449095-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 ssh -n ha-449095-m04 "sudo cat /home/docker/cp-test_ha-449095-m02_ha-449095-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 cp testdata/cp-test.txt ha-449095-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 ssh -n ha-449095-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 cp ha-449095-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1820458948/001/cp-test_ha-449095-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 ssh -n ha-449095-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 cp ha-449095-m03:/home/docker/cp-test.txt ha-449095:/home/docker/cp-test_ha-449095-m03_ha-449095.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 ssh -n ha-449095-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 ssh -n ha-449095 "sudo cat /home/docker/cp-test_ha-449095-m03_ha-449095.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 cp ha-449095-m03:/home/docker/cp-test.txt ha-449095-m02:/home/docker/cp-test_ha-449095-m03_ha-449095-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 ssh -n ha-449095-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 ssh -n ha-449095-m02 "sudo cat /home/docker/cp-test_ha-449095-m03_ha-449095-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 cp ha-449095-m03:/home/docker/cp-test.txt ha-449095-m04:/home/docker/cp-test_ha-449095-m03_ha-449095-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 ssh -n ha-449095-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 ssh -n ha-449095-m04 "sudo cat /home/docker/cp-test_ha-449095-m03_ha-449095-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 cp testdata/cp-test.txt ha-449095-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 ssh -n ha-449095-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 cp ha-449095-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1820458948/001/cp-test_ha-449095-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 ssh -n ha-449095-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 cp ha-449095-m04:/home/docker/cp-test.txt ha-449095:/home/docker/cp-test_ha-449095-m04_ha-449095.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 ssh -n ha-449095-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 ssh -n ha-449095 "sudo cat /home/docker/cp-test_ha-449095-m04_ha-449095.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 cp ha-449095-m04:/home/docker/cp-test.txt ha-449095-m02:/home/docker/cp-test_ha-449095-m04_ha-449095-m02.txt
E1119 02:23:03.176013 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/functional-132054/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 ssh -n ha-449095-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 ssh -n ha-449095-m02 "sudo cat /home/docker/cp-test_ha-449095-m04_ha-449095-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 cp ha-449095-m04:/home/docker/cp-test.txt ha-449095-m03:/home/docker/cp-test_ha-449095-m04_ha-449095-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 ssh -n ha-449095-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 ssh -n ha-449095-m03 "sudo cat /home/docker/cp-test_ha-449095-m04_ha-449095-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.79s)
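
CopyFile exercises "minikube cp" in both directions and verifies each copy by cat-ing the file back over "minikube ssh". A minimal round-trip sketch of that pattern, using the binary path, profile, node, and file paths shown in the log:

package main

import (
	"log"
	"os/exec"
)

// Run the minikube binary used in this report with the given arguments.
func run(args ...string) []byte {
	out, err := exec.Command("out/minikube-linux-arm64", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("%v failed: %v\n%s", args, err, out)
	}
	return out
}

// Copy a local file onto a node, then read it back over SSH to verify,
// following the cp/ssh pattern used by the CopyFile test above.
func main() {
	run("-p", "ha-449095", "cp", "testdata/cp-test.txt", "ha-449095-m02:/home/docker/cp-test.txt")
	out := run("-p", "ha-449095", "ssh", "-n", "ha-449095-m02", "sudo cat /home/docker/cp-test.txt")
	log.Printf("round-tripped contents:\n%s", out)
}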

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (12.9s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-449095 node stop m02 --alsologtostderr -v 5: (12.087242446s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-449095 status --alsologtostderr -v 5: exit status 7 (809.072342ms)

                                                
                                                
-- stdout --
	ha-449095
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-449095-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-449095-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-449095-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1119 02:23:17.237321 1508984 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:23:17.237449 1508984 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:23:17.237454 1508984 out.go:374] Setting ErrFile to fd 2...
	I1119 02:23:17.237459 1508984 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:23:17.240082 1508984 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-1463525/.minikube/bin
	I1119 02:23:17.240317 1508984 out.go:368] Setting JSON to false
	I1119 02:23:17.240338 1508984 mustload.go:66] Loading cluster: ha-449095
	I1119 02:23:17.240745 1508984 config.go:182] Loaded profile config "ha-449095": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:23:17.240755 1508984 status.go:174] checking status of ha-449095 ...
	I1119 02:23:17.241309 1508984 cli_runner.go:164] Run: docker container inspect ha-449095 --format={{.State.Status}}
	I1119 02:23:17.241731 1508984 notify.go:221] Checking for updates...
	I1119 02:23:17.264636 1508984 status.go:371] ha-449095 host status = "Running" (err=<nil>)
	I1119 02:23:17.264662 1508984 host.go:66] Checking if "ha-449095" exists ...
	I1119 02:23:17.264957 1508984 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-449095
	I1119 02:23:17.297722 1508984 host.go:66] Checking if "ha-449095" exists ...
	I1119 02:23:17.298022 1508984 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 02:23:17.298074 1508984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-449095
	I1119 02:23:17.327853 1508984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34629 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/ha-449095/id_rsa Username:docker}
	I1119 02:23:17.427544 1508984 ssh_runner.go:195] Run: systemctl --version
	I1119 02:23:17.434122 1508984 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 02:23:17.447617 1508984 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:23:17.527535 1508984 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:true NGoroutines:72 SystemTime:2025-11-19 02:23:17.517443496 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 02:23:17.528089 1508984 kubeconfig.go:125] found "ha-449095" server: "https://192.168.49.254:8443"
	I1119 02:23:17.528130 1508984 api_server.go:166] Checking apiserver status ...
	I1119 02:23:17.528190 1508984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 02:23:17.540434 1508984 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1250/cgroup
	I1119 02:23:17.549072 1508984 api_server.go:182] apiserver freezer: "10:freezer:/docker/abaa2f8312ae4960b1781e18d7803c6c91439acc516627497381340e0fa485a8/crio/crio-5b9d5a56161f92fd97075709df62fa2fe8deb5007405ebcfd22b56377c61d564"
	I1119 02:23:17.549148 1508984 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/abaa2f8312ae4960b1781e18d7803c6c91439acc516627497381340e0fa485a8/crio/crio-5b9d5a56161f92fd97075709df62fa2fe8deb5007405ebcfd22b56377c61d564/freezer.state
	I1119 02:23:17.557133 1508984 api_server.go:204] freezer state: "THAWED"
	I1119 02:23:17.557163 1508984 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1119 02:23:17.565598 1508984 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1119 02:23:17.565627 1508984 status.go:463] ha-449095 apiserver status = Running (err=<nil>)
	I1119 02:23:17.565638 1508984 status.go:176] ha-449095 status: &{Name:ha-449095 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1119 02:23:17.565669 1508984 status.go:174] checking status of ha-449095-m02 ...
	I1119 02:23:17.565992 1508984 cli_runner.go:164] Run: docker container inspect ha-449095-m02 --format={{.State.Status}}
	I1119 02:23:17.584134 1508984 status.go:371] ha-449095-m02 host status = "Stopped" (err=<nil>)
	I1119 02:23:17.584158 1508984 status.go:384] host is not running, skipping remaining checks
	I1119 02:23:17.584164 1508984 status.go:176] ha-449095-m02 status: &{Name:ha-449095-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1119 02:23:17.584189 1508984 status.go:174] checking status of ha-449095-m03 ...
	I1119 02:23:17.584520 1508984 cli_runner.go:164] Run: docker container inspect ha-449095-m03 --format={{.State.Status}}
	I1119 02:23:17.602935 1508984 status.go:371] ha-449095-m03 host status = "Running" (err=<nil>)
	I1119 02:23:17.602961 1508984 host.go:66] Checking if "ha-449095-m03" exists ...
	I1119 02:23:17.603347 1508984 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-449095-m03
	I1119 02:23:17.631767 1508984 host.go:66] Checking if "ha-449095-m03" exists ...
	I1119 02:23:17.632087 1508984 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 02:23:17.632129 1508984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-449095-m03
	I1119 02:23:17.655442 1508984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34639 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/ha-449095-m03/id_rsa Username:docker}
	I1119 02:23:17.755560 1508984 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 02:23:17.770369 1508984 kubeconfig.go:125] found "ha-449095" server: "https://192.168.49.254:8443"
	I1119 02:23:17.770399 1508984 api_server.go:166] Checking apiserver status ...
	I1119 02:23:17.770443 1508984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 02:23:17.781875 1508984 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1197/cgroup
	I1119 02:23:17.790592 1508984 api_server.go:182] apiserver freezer: "10:freezer:/docker/32ec496283d0e6d878f062efa360f69cdcfdcae51f080e92e42fd380b7803d76/crio/crio-642b0d98b24ed55ca79c5a1c75338e38e82b6cde283eafede8c1746720abfd0f"
	I1119 02:23:17.790690 1508984 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/32ec496283d0e6d878f062efa360f69cdcfdcae51f080e92e42fd380b7803d76/crio/crio-642b0d98b24ed55ca79c5a1c75338e38e82b6cde283eafede8c1746720abfd0f/freezer.state
	I1119 02:23:17.798829 1508984 api_server.go:204] freezer state: "THAWED"
	I1119 02:23:17.798874 1508984 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1119 02:23:17.808744 1508984 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1119 02:23:17.808783 1508984 status.go:463] ha-449095-m03 apiserver status = Running (err=<nil>)
	I1119 02:23:17.808794 1508984 status.go:176] ha-449095-m03 status: &{Name:ha-449095-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1119 02:23:17.808827 1508984 status.go:174] checking status of ha-449095-m04 ...
	I1119 02:23:17.809151 1508984 cli_runner.go:164] Run: docker container inspect ha-449095-m04 --format={{.State.Status}}
	I1119 02:23:17.826436 1508984 status.go:371] ha-449095-m04 host status = "Running" (err=<nil>)
	I1119 02:23:17.826461 1508984 host.go:66] Checking if "ha-449095-m04" exists ...
	I1119 02:23:17.826841 1508984 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-449095-m04
	I1119 02:23:17.848613 1508984 host.go:66] Checking if "ha-449095-m04" exists ...
	I1119 02:23:17.848921 1508984 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 02:23:17.848971 1508984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-449095-m04
	I1119 02:23:17.869781 1508984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34644 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/ha-449095-m04/id_rsa Username:docker}
	I1119 02:23:17.974945 1508984 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 02:23:17.988570 1508984 status.go:176] ha-449095-m04 status: &{Name:ha-449095-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.90s)
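
As the output above shows, "minikube status" exits non-zero (status 7 here) when any node in the profile is stopped, so the test treats the non-zero exit as the expected result rather than a failure. A minimal sketch of distinguishing that case from a genuine command error when calling status programmatically, with the binary path and profile name taken from the log:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// Run `minikube status` and report its exit code; a non-zero code such as 7
// indicates one or more stopped components rather than a failed command.
func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "-p", "ha-449095", "status")
	out, err := cmd.Output()
	fmt.Print(string(out))
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		fmt.Printf("status exited with code %d (one or more nodes not running)\n", ee.ExitCode())
	} else if err != nil {
		fmt.Printf("could not run status: %v\n", err)
	}
}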

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.81s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (31.03s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 node start m02 --alsologtostderr -v 5
E1119 02:23:23.657651 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/functional-132054/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-449095 node start m02 --alsologtostderr -v 5: (29.74308917s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-449095 status --alsologtostderr -v 5: (1.163307744s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (31.03s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.18s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.17981463s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.18s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (131.75s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 stop --alsologtostderr -v 5
E1119 02:24:04.619554 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/functional-132054/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-449095 stop --alsologtostderr -v 5: (27.02689858s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 start --wait true --alsologtostderr -v 5
E1119 02:25:26.541565 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/functional-132054/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:25:29.009155 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-449095 start --wait true --alsologtostderr -v 5: (1m44.539109606s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (131.75s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (12.72s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-449095 node delete m03 --alsologtostderr -v 5: (11.770698602s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.72s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.77s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.77s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (36.14s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-449095 stop --alsologtostderr -v 5: (36.029901372s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-449095 status --alsologtostderr -v 5: exit status 7 (112.138947ms)

                                                
                                                
-- stdout --
	ha-449095
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-449095-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-449095-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1119 02:26:52.333398 1520875 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:26:52.333595 1520875 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:26:52.333627 1520875 out.go:374] Setting ErrFile to fd 2...
	I1119 02:26:52.333648 1520875 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:26:52.333919 1520875 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-1463525/.minikube/bin
	I1119 02:26:52.334161 1520875 out.go:368] Setting JSON to false
	I1119 02:26:52.334224 1520875 mustload.go:66] Loading cluster: ha-449095
	I1119 02:26:52.334293 1520875 notify.go:221] Checking for updates...
	I1119 02:26:52.334679 1520875 config.go:182] Loaded profile config "ha-449095": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:26:52.334711 1520875 status.go:174] checking status of ha-449095 ...
	I1119 02:26:52.335257 1520875 cli_runner.go:164] Run: docker container inspect ha-449095 --format={{.State.Status}}
	I1119 02:26:52.353714 1520875 status.go:371] ha-449095 host status = "Stopped" (err=<nil>)
	I1119 02:26:52.353733 1520875 status.go:384] host is not running, skipping remaining checks
	I1119 02:26:52.353739 1520875 status.go:176] ha-449095 status: &{Name:ha-449095 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1119 02:26:52.353769 1520875 status.go:174] checking status of ha-449095-m02 ...
	I1119 02:26:52.354156 1520875 cli_runner.go:164] Run: docker container inspect ha-449095-m02 --format={{.State.Status}}
	I1119 02:26:52.371233 1520875 status.go:371] ha-449095-m02 host status = "Stopped" (err=<nil>)
	I1119 02:26:52.371252 1520875 status.go:384] host is not running, skipping remaining checks
	I1119 02:26:52.371271 1520875 status.go:176] ha-449095-m02 status: &{Name:ha-449095-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1119 02:26:52.371291 1520875 status.go:174] checking status of ha-449095-m04 ...
	I1119 02:26:52.371590 1520875 cli_runner.go:164] Run: docker container inspect ha-449095-m04 --format={{.State.Status}}
	I1119 02:26:52.397953 1520875 status.go:371] ha-449095-m04 host status = "Stopped" (err=<nil>)
	I1119 02:26:52.397972 1520875 status.go:384] host is not running, skipping remaining checks
	I1119 02:26:52.397978 1520875 status.go:176] ha-449095-m04 status: &{Name:ha-449095-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.14s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (75.66s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1119 02:27:42.679378 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/functional-132054/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-449095 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m14.715019629s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (75.66s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.86s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.86s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (78.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 node add --control-plane --alsologtostderr -v 5
E1119 02:28:10.383495 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/functional-132054/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-449095 node add --control-plane --alsologtostderr -v 5: (1m17.03162946s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-449095 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-449095 status --alsologtostderr -v 5: (1.03592995s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (78.07s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.03s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.030431087s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.03s)

                                                
                                    
x
+
TestJSONOutput/start/Command (82.51s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-755140 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1119 02:30:29.010434 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-755140 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m22.509785033s)
--- PASS: TestJSONOutput/start/Command (82.51s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (5.83s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-755140 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-755140 --output=json --user=testUser: (5.830637776s)
--- PASS: TestJSONOutput/stop/Command (5.83s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.24s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-195251 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-195251 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (93.924433ms)

-- stdout --
	{"specversion":"1.0","id":"8237c015-c420-4837-9d9b-1e7b0d63e35f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-195251] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"53017bdc-41a2-4c0f-96d8-568ca60b6089","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21924"}}
	{"specversion":"1.0","id":"9599878e-7060-42b4-a2de-938a0a6f1644","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"314f3024-9a86-48d3-8525-96c686ea9160","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21924-1463525/kubeconfig"}}
	{"specversion":"1.0","id":"656c684c-7301-42c6-b1ef-2afdac173ded","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-1463525/.minikube"}}
	{"specversion":"1.0","id":"d8cb7c74-11c2-4a30-a340-b24a376ee650","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"ce96a0f3-2dd3-454e-8a25-1923cf2c459a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"389e3724-eba0-4fda-81b1-4b7d7fc6e31f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-195251" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-195251
--- PASS: TestErrorJSONOutput (0.24s)
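
Note (not part of the test log): the stdout captured above is line-delimited CloudEvents JSON. The following is a minimal Go sketch for consuming that stream, assuming only the keys visible in the output (specversion, id, source, type, data); it is not how the test itself parses it.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// cloudEvent mirrors only the fields visible in the JSON lines above.
type cloudEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// Pipe minikube's output into this program, e.g.:
	//   minikube start -p demo --output=json | go run decode.go
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev cloudEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any non-JSON lines
		}
		// "io.k8s.sigs.minikube.error" events carry exitcode/name in data,
		// as in the DRV_UNSUPPORTED_OS event shown above.
		fmt.Printf("%s: %s\n", ev.Type, ev.Data["message"])
	}
}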

TestKicCustomNetwork/create_custom_network (69.79s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-443912 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-443912 --network=: (1m7.469327127s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-443912" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-443912
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-443912: (2.300329504s)
--- PASS: TestKicCustomNetwork/create_custom_network (69.79s)

TestKicCustomNetwork/use_default_bridge_network (38.46s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-000812 --network=bridge
E1119 02:32:42.679505 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/functional-132054/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-000812 --network=bridge: (36.295875764s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-000812" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-000812
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-000812: (2.139235186s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (38.46s)

TestKicExistingNetwork (37.03s)

=== RUN   TestKicExistingNetwork
I1119 02:33:01.619163 1465377 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1119 02:33:01.635010 1465377 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1119 02:33:01.635865 1465377 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1119 02:33:01.635906 1465377 cli_runner.go:164] Run: docker network inspect existing-network
W1119 02:33:01.650314 1465377 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1119 02:33:01.650344 1465377 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1119 02:33:01.650363 1465377 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1119 02:33:01.650468 1465377 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1119 02:33:01.668379 1465377 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-30778cc553ec IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:62:24:59:d9:05:e6} reservation:<nil>}
I1119 02:33:01.668715 1465377 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400034f920}
I1119 02:33:01.668747 1465377 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1119 02:33:01.668800 1465377 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1119 02:33:01.722921 1465377 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-947268 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-947268 --network=existing-network: (34.81111561s)
helpers_test.go:175: Cleaning up "existing-network-947268" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-947268
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-947268: (2.08180821s)
I1119 02:33:38.632502 1465377 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (37.03s)
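
Note (not part of the test log): the flow above pre-creates a Docker network and then points minikube at it. A rough Go sketch of the same sequence, assuming docker and minikube are on PATH; the profile name is made up, and the subnet/gateway are simply the values the test happened to pick.

package main

import (
	"log"
	"os/exec"
)

// run executes a command and aborts on the first failure.
func run(name string, args ...string) {
	if out, err := exec.Command(name, args...).CombinedOutput(); err != nil {
		log.Fatalf("%s %v failed: %v\n%s", name, args, err, out)
	}
}

func main() {
	// Create the bridge network first, then hand its name to minikube,
	// and clean the profile up afterwards.
	run("docker", "network", "create", "--driver=bridge",
		"--subnet=192.168.58.0/24", "--gateway=192.168.58.1", "existing-network")
	run("minikube", "start", "-p", "existing-network-demo", "--network=existing-network")
	run("minikube", "delete", "-p", "existing-network-demo")
}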

TestKicCustomSubnet (38.91s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-650458 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-650458 --subnet=192.168.60.0/24: (36.61305837s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-650458 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-650458" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-650458
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-650458: (2.259755007s)
--- PASS: TestKicCustomSubnet (38.91s)

TestKicStaticIP (36.74s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-111051 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-111051 --static-ip=192.168.200.200: (34.352079028s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-111051 ip
helpers_test.go:175: Cleaning up "static-ip-111051" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-111051
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-111051: (2.239067827s)
--- PASS: TestKicStaticIP (36.74s)
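
Note (not part of the test log): a small Go sketch of the static-IP flow exercised above, using a hypothetical profile name; the --static-ip flag and the follow-up "minikube ip" call mirror kic_custom_network_test.go:132-138.

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	start := exec.Command("minikube", "start", "-p", "static-ip-demo",
		"--static-ip=192.168.200.200")
	if out, err := start.CombinedOutput(); err != nil {
		log.Fatalf("start failed: %v\n%s", err, out)
	}
	// Read the IP back and confirm it matches what was requested.
	ip, err := exec.Command("minikube", "-p", "static-ip-demo", "ip").Output()
	if err != nil {
		log.Fatalf("ip failed: %v", err)
	}
	fmt.Println("cluster IP:", strings.TrimSpace(string(ip)))
}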

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (76.43s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-777455 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-777455 --driver=docker  --container-runtime=crio: (34.503749646s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-779883 --driver=docker  --container-runtime=crio
E1119 02:35:29.009566 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-779883 --driver=docker  --container-runtime=crio: (36.446210099s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-777455
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-779883
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-779883" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-779883
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-779883: (2.060338252s)
helpers_test.go:175: Cleaning up "first-777455" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-777455
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-777455: (2.022885539s)
--- PASS: TestMinikubeProfile (76.43s)
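
Note (not part of the test log): the test switches the active profile and then reads "minikube profile list -ojson". The exact JSON schema is not shown in this log, so the sketch below decodes into a generic map rather than assuming field names.

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("minikube", "profile", "list", "-ojson").Output()
	if err != nil {
		log.Fatalf("profile list failed: %v", err)
	}
	// Generic decode: no concrete schema is assumed here.
	var profiles map[string]json.RawMessage
	if err := json.Unmarshal(out, &profiles); err != nil {
		log.Fatalf("unexpected output shape: %v", err)
	}
	for key, raw := range profiles {
		fmt.Printf("%s: %d bytes\n", key, len(raw))
	}
}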

TestMountStart/serial/StartWithMountFirst (9.17s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-446073 --memory=3072 --mount-string /tmp/TestMountStartserial2884723620/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-446073 --memory=3072 --mount-string /tmp/TestMountStartserial2884723620/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.1657728s)
--- PASS: TestMountStart/serial/StartWithMountFirst (9.17s)
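
Note (not part of the test log): a Go sketch of the mount flow above, reusing the flags mount_start_test.go:118 passes; the profile name and host path here are made up.

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Same mount flags the test passes, pointed at a made-up host directory.
	start := exec.Command("minikube", "start", "-p", "mount-demo",
		"--memory=3072", "--no-kubernetes",
		"--driver=docker", "--container-runtime=crio",
		"--mount-string", "/tmp/mount-demo:/minikube-host",
		"--mount-port", "46464", "--mount-uid", "0", "--mount-gid", "0",
		"--mount-msize", "6543")
	if out, err := start.CombinedOutput(); err != nil {
		log.Fatalf("start failed: %v\n%s", err, out)
	}
	// Verify the host directory is visible inside the machine.
	ls, err := exec.Command("minikube", "-p", "mount-demo", "ssh", "--",
		"ls", "/minikube-host").CombinedOutput()
	if err != nil {
		log.Fatalf("ssh failed: %v\n%s", err, ls)
	}
	fmt.Printf("mounted contents:\n%s", ls)
}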

TestMountStart/serial/VerifyMountFirst (0.27s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-446073 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

TestMountStart/serial/StartWithMountSecond (9.65s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-448067 --memory=3072 --mount-string /tmp/TestMountStartserial2884723620/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-448067 --memory=3072 --mount-string /tmp/TestMountStartserial2884723620/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.649329954s)
--- PASS: TestMountStart/serial/StartWithMountSecond (9.65s)

TestMountStart/serial/VerifyMountSecond (0.3s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-448067 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.30s)

TestMountStart/serial/DeleteFirst (1.7s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-446073 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-446073 --alsologtostderr -v=5: (1.697919378s)
--- PASS: TestMountStart/serial/DeleteFirst (1.70s)

TestMountStart/serial/VerifyMountPostDelete (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-448067 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

TestMountStart/serial/Stop (1.29s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-448067
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-448067: (1.289754738s)
--- PASS: TestMountStart/serial/Stop (1.29s)

TestMountStart/serial/RestartStopped (8.29s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-448067
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-448067: (7.294678339s)
--- PASS: TestMountStart/serial/RestartStopped (8.29s)

TestMountStart/serial/VerifyMountPostStop (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-448067 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

TestMultiNode/serial/FreshStart2Nodes (137.54s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-727622 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1119 02:37:42.678983 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/functional-132054/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 02:38:32.082522 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-727622 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (2m16.994936566s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-727622 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (137.54s)
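
Note (not part of the test log): the two-node bring-up above, reduced to a Go sketch with a made-up profile name; the flags mirror multinode_test.go:96.

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Two nodes, same flags as the test run above.
	start := exec.Command("minikube", "start", "-p", "multinode-demo",
		"--wait=true", "--memory=3072", "--nodes=2",
		"--driver=docker", "--container-runtime=crio")
	if out, err := start.CombinedOutput(); err != nil {
		log.Fatalf("start failed: %v\n%s", err, out)
	}
	// status exits non-zero if any node is down, so the error is ignored here
	// and the raw output is printed instead.
	out, _ := exec.Command("minikube", "-p", "multinode-demo", "status").CombinedOutput()
	fmt.Printf("%s", out)
}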

TestMultiNode/serial/DeployApp2Nodes (5.2s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-727622 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-727622 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-727622 -- rollout status deployment/busybox: (3.49630335s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-727622 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-727622 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-727622 -- exec busybox-7b57f96db7-m56g4 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-727622 -- exec busybox-7b57f96db7-q8hpt -- nslookup kubernetes.io
E1119 02:39:05.745721 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/functional-132054/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-727622 -- exec busybox-7b57f96db7-m56g4 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-727622 -- exec busybox-7b57f96db7-q8hpt -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-727622 -- exec busybox-7b57f96db7-m56g4 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-727622 -- exec busybox-7b57f96db7-q8hpt -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.20s)
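
Note (not part of the test log): a Go sketch of the deploy-and-resolve steps above; the manifest path and jsonpath expression are taken from the log, and kubectl is assumed to already point at the multinode cluster.

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// kubectl runs a kubectl command and aborts on failure.
func kubectl(args ...string) string {
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl %v: %v\n%s", args, err, out)
	}
	return string(out)
}

func main() {
	kubectl("apply", "-f", "testdata/multinodes/multinode-pod-dns-test.yaml")
	kubectl("rollout", "status", "deployment/busybox")
	pods := strings.Fields(kubectl("get", "pods", "-o", "jsonpath={.items[*].metadata.name}"))
	for _, pod := range pods {
		// Each replica should resolve the in-cluster service name.
		fmt.Print(kubectl("exec", pod, "--", "nslookup", "kubernetes.default.svc.cluster.local"))
	}
}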

TestMultiNode/serial/PingHostFrom2Pods (0.97s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-727622 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-727622 -- exec busybox-7b57f96db7-m56g4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-727622 -- exec busybox-7b57f96db7-m56g4 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-727622 -- exec busybox-7b57f96db7-q8hpt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-727622 -- exec busybox-7b57f96db7-q8hpt -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.97s)

TestMultiNode/serial/AddNode (58.28s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-727622 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-727622 -v=5 --alsologtostderr: (57.573913583s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-727622 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (58.28s)

TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-727622 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.72s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.72s)

TestMultiNode/serial/CopyFile (10.23s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-727622 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-727622 cp testdata/cp-test.txt multinode-727622:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-727622 ssh -n multinode-727622 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-727622 cp multinode-727622:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3534374194/001/cp-test_multinode-727622.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-727622 ssh -n multinode-727622 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-727622 cp multinode-727622:/home/docker/cp-test.txt multinode-727622-m02:/home/docker/cp-test_multinode-727622_multinode-727622-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-727622 ssh -n multinode-727622 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-727622 ssh -n multinode-727622-m02 "sudo cat /home/docker/cp-test_multinode-727622_multinode-727622-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-727622 cp multinode-727622:/home/docker/cp-test.txt multinode-727622-m03:/home/docker/cp-test_multinode-727622_multinode-727622-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-727622 ssh -n multinode-727622 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-727622 ssh -n multinode-727622-m03 "sudo cat /home/docker/cp-test_multinode-727622_multinode-727622-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-727622 cp testdata/cp-test.txt multinode-727622-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-727622 ssh -n multinode-727622-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-727622 cp multinode-727622-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3534374194/001/cp-test_multinode-727622-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-727622 ssh -n multinode-727622-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-727622 cp multinode-727622-m02:/home/docker/cp-test.txt multinode-727622:/home/docker/cp-test_multinode-727622-m02_multinode-727622.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-727622 ssh -n multinode-727622-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-727622 ssh -n multinode-727622 "sudo cat /home/docker/cp-test_multinode-727622-m02_multinode-727622.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-727622 cp multinode-727622-m02:/home/docker/cp-test.txt multinode-727622-m03:/home/docker/cp-test_multinode-727622-m02_multinode-727622-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-727622 ssh -n multinode-727622-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-727622 ssh -n multinode-727622-m03 "sudo cat /home/docker/cp-test_multinode-727622-m02_multinode-727622-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-727622 cp testdata/cp-test.txt multinode-727622-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-727622 ssh -n multinode-727622-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-727622 cp multinode-727622-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3534374194/001/cp-test_multinode-727622-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-727622 ssh -n multinode-727622-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-727622 cp multinode-727622-m03:/home/docker/cp-test.txt multinode-727622:/home/docker/cp-test_multinode-727622-m03_multinode-727622.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-727622 ssh -n multinode-727622-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-727622 ssh -n multinode-727622 "sudo cat /home/docker/cp-test_multinode-727622-m03_multinode-727622.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-727622 cp multinode-727622-m03:/home/docker/cp-test.txt multinode-727622-m02:/home/docker/cp-test_multinode-727622-m03_multinode-727622-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-727622 ssh -n multinode-727622-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-727622 ssh -n multinode-727622-m02 "sudo cat /home/docker/cp-test_multinode-727622-m03_multinode-727622-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.23s)
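
Note (not part of the test log): one round trip of the copy checks above as a Go sketch -- push a file to a node with "minikube cp", then read it back over "minikube ssh". Profile and node names follow the multinode-727622 cluster in this log.

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// mk runs a minikube command and aborts on failure.
func mk(args ...string) []byte {
	out, err := exec.Command("minikube", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("minikube %v: %v\n%s", args, err, out)
	}
	return out
}

func main() {
	mk("-p", "multinode-727622", "cp", "testdata/cp-test.txt",
		"multinode-727622-m02:/home/docker/cp-test.txt")
	out := mk("-p", "multinode-727622", "ssh", "-n", "multinode-727622-m02",
		"sudo cat /home/docker/cp-test.txt")
	fmt.Printf("%s", out)
}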

TestMultiNode/serial/StopNode (2.37s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-727622 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-727622 node stop m03: (1.316237395s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-727622 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-727622 status: exit status 7 (521.624614ms)

-- stdout --
	multinode-727622
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-727622-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-727622-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-727622 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-727622 status --alsologtostderr: exit status 7 (534.348989ms)

-- stdout --
	multinode-727622
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-727622-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-727622-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1119 02:40:18.830104 1571208 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:40:18.830215 1571208 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:40:18.830226 1571208 out.go:374] Setting ErrFile to fd 2...
	I1119 02:40:18.830232 1571208 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:40:18.830489 1571208 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-1463525/.minikube/bin
	I1119 02:40:18.830679 1571208 out.go:368] Setting JSON to false
	I1119 02:40:18.830712 1571208 mustload.go:66] Loading cluster: multinode-727622
	I1119 02:40:18.830797 1571208 notify.go:221] Checking for updates...
	I1119 02:40:18.831124 1571208 config.go:182] Loaded profile config "multinode-727622": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:40:18.831143 1571208 status.go:174] checking status of multinode-727622 ...
	I1119 02:40:18.831977 1571208 cli_runner.go:164] Run: docker container inspect multinode-727622 --format={{.State.Status}}
	I1119 02:40:18.850424 1571208 status.go:371] multinode-727622 host status = "Running" (err=<nil>)
	I1119 02:40:18.850451 1571208 host.go:66] Checking if "multinode-727622" exists ...
	I1119 02:40:18.850759 1571208 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-727622
	I1119 02:40:18.879505 1571208 host.go:66] Checking if "multinode-727622" exists ...
	I1119 02:40:18.879812 1571208 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 02:40:18.879881 1571208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-727622
	I1119 02:40:18.898718 1571208 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34749 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/multinode-727622/id_rsa Username:docker}
	I1119 02:40:19.003554 1571208 ssh_runner.go:195] Run: systemctl --version
	I1119 02:40:19.010825 1571208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 02:40:19.024441 1571208 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:40:19.080356 1571208 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-19 02:40:19.070176052 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 02:40:19.080909 1571208 kubeconfig.go:125] found "multinode-727622" server: "https://192.168.67.2:8443"
	I1119 02:40:19.080943 1571208 api_server.go:166] Checking apiserver status ...
	I1119 02:40:19.080991 1571208 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 02:40:19.092030 1571208 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1222/cgroup
	I1119 02:40:19.100419 1571208 api_server.go:182] apiserver freezer: "10:freezer:/docker/ea903a0083e50e0785b334a648372925bfb1811d33b6cfa3a9aec24fe22ede6e/crio/crio-f1ff8da61c0e74854db02421ae0bbfba93277d73f7ba2538ac1fd76b7253f6cf"
	I1119 02:40:19.100499 1571208 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/ea903a0083e50e0785b334a648372925bfb1811d33b6cfa3a9aec24fe22ede6e/crio/crio-f1ff8da61c0e74854db02421ae0bbfba93277d73f7ba2538ac1fd76b7253f6cf/freezer.state
	I1119 02:40:19.108943 1571208 api_server.go:204] freezer state: "THAWED"
	I1119 02:40:19.108974 1571208 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1119 02:40:19.117769 1571208 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1119 02:40:19.117801 1571208 status.go:463] multinode-727622 apiserver status = Running (err=<nil>)
	I1119 02:40:19.117836 1571208 status.go:176] multinode-727622 status: &{Name:multinode-727622 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1119 02:40:19.117869 1571208 status.go:174] checking status of multinode-727622-m02 ...
	I1119 02:40:19.118300 1571208 cli_runner.go:164] Run: docker container inspect multinode-727622-m02 --format={{.State.Status}}
	I1119 02:40:19.135523 1571208 status.go:371] multinode-727622-m02 host status = "Running" (err=<nil>)
	I1119 02:40:19.135544 1571208 host.go:66] Checking if "multinode-727622-m02" exists ...
	I1119 02:40:19.135925 1571208 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-727622-m02
	I1119 02:40:19.153863 1571208 host.go:66] Checking if "multinode-727622-m02" exists ...
	I1119 02:40:19.154180 1571208 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 02:40:19.154227 1571208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-727622-m02
	I1119 02:40:19.170854 1571208 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34754 SSHKeyPath:/home/jenkins/minikube-integration/21924-1463525/.minikube/machines/multinode-727622-m02/id_rsa Username:docker}
	I1119 02:40:19.278867 1571208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 02:40:19.292452 1571208 status.go:176] multinode-727622-m02 status: &{Name:multinode-727622-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1119 02:40:19.292483 1571208 status.go:174] checking status of multinode-727622-m03 ...
	I1119 02:40:19.292784 1571208 cli_runner.go:164] Run: docker container inspect multinode-727622-m03 --format={{.State.Status}}
	I1119 02:40:19.311517 1571208 status.go:371] multinode-727622-m03 host status = "Stopped" (err=<nil>)
	I1119 02:40:19.311538 1571208 status.go:384] host is not running, skipping remaining checks
	I1119 02:40:19.311544 1571208 status.go:176] multinode-727622-m03 status: &{Name:multinode-727622-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.37s)
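
Note (not part of the test log): "minikube status" exits non-zero (7 in the output above) when any node is stopped, so a caller has to inspect the exit code rather than treat every non-nil error as fatal. A minimal Go sketch of that handling:

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("minikube", "-p", "multinode-727622", "status").CombinedOutput()
	fmt.Printf("%s", out)
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// Exit code 7 is what the log above shows when a node's host is Stopped.
		fmt.Println("status exit code:", exitErr.ExitCode())
	} else if err != nil {
		log.Fatalf("could not run minikube: %v", err)
	}
}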

TestMultiNode/serial/StartAfterStop (7.82s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-727622 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-727622 node start m03 -v=5 --alsologtostderr: (7.037061789s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-727622 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.82s)

TestMultiNode/serial/RestartKeepsNodes (76.78s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-727622
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-727622
E1119 02:40:29.009369 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-727622: (25.067396339s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-727622 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-727622 --wait=true -v=5 --alsologtostderr: (51.587127406s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-727622
--- PASS: TestMultiNode/serial/RestartKeepsNodes (76.78s)

TestMultiNode/serial/DeleteNode (5.8s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-727622 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-727622 node delete m03: (5.086060834s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-727622 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.80s)

TestMultiNode/serial/StopMultiNode (24.07s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-727622 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-727622 stop: (23.884070582s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-727622 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-727622 status: exit status 7 (88.85342ms)

-- stdout --
	multinode-727622
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-727622-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-727622 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-727622 status --alsologtostderr: exit status 7 (92.698905ms)

-- stdout --
	multinode-727622
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-727622-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1119 02:42:13.724342 1579054 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:42:13.724466 1579054 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:42:13.724476 1579054 out.go:374] Setting ErrFile to fd 2...
	I1119 02:42:13.724481 1579054 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:42:13.724748 1579054 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-1463525/.minikube/bin
	I1119 02:42:13.724919 1579054 out.go:368] Setting JSON to false
	I1119 02:42:13.724949 1579054 mustload.go:66] Loading cluster: multinode-727622
	I1119 02:42:13.725374 1579054 config.go:182] Loaded profile config "multinode-727622": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:42:13.725392 1579054 status.go:174] checking status of multinode-727622 ...
	I1119 02:42:13.725942 1579054 cli_runner.go:164] Run: docker container inspect multinode-727622 --format={{.State.Status}}
	I1119 02:42:13.726157 1579054 notify.go:221] Checking for updates...
	I1119 02:42:13.744545 1579054 status.go:371] multinode-727622 host status = "Stopped" (err=<nil>)
	I1119 02:42:13.744571 1579054 status.go:384] host is not running, skipping remaining checks
	I1119 02:42:13.744578 1579054 status.go:176] multinode-727622 status: &{Name:multinode-727622 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1119 02:42:13.744609 1579054 status.go:174] checking status of multinode-727622-m02 ...
	I1119 02:42:13.744901 1579054 cli_runner.go:164] Run: docker container inspect multinode-727622-m02 --format={{.State.Status}}
	I1119 02:42:13.769686 1579054 status.go:371] multinode-727622-m02 host status = "Stopped" (err=<nil>)
	I1119 02:42:13.769707 1579054 status.go:384] host is not running, skipping remaining checks
	I1119 02:42:13.769725 1579054 status.go:176] multinode-727622-m02 status: &{Name:multinode-727622-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.07s)

TestMultiNode/serial/RestartMultiNode (55.12s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-727622 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1119 02:42:42.679873 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/functional-132054/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-727622 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (54.451631923s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-727622 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (55.12s)

TestMultiNode/serial/ValidateNameConflict (33.97s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-727622
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-727622-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-727622-m02 --driver=docker  --container-runtime=crio: exit status 14 (98.435772ms)

-- stdout --
	* [multinode-727622-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21924
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21924-1463525/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-1463525/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-727622-m02' is duplicated with machine name 'multinode-727622-m02' in profile 'multinode-727622'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-727622-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-727622-m03 --driver=docker  --container-runtime=crio: (31.384432329s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-727622
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-727622: exit status 80 (339.93532ms)

-- stdout --
	* Adding node m03 to cluster multinode-727622 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-727622-m03 already exists in multinode-727622-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-727622-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-727622-m03: (2.09139047s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (33.97s)

TestPreload (150.26s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-698198 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-698198 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (1m4.740416807s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-698198 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-arm64 -p test-preload-698198 image pull gcr.io/k8s-minikube/busybox: (2.15258739s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-698198
preload_test.go:57: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-698198: (6.299468902s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-698198 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1119 02:45:29.009599 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-698198 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (1m14.412714246s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-698198 image list
helpers_test.go:175: Cleaning up "test-preload-698198" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-698198
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-698198: (2.416085331s)
--- PASS: TestPreload (150.26s)
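
Note (not part of the test log): the preload scenario above as a plain Go sketch with a made-up profile name -- start without the preload tarball, pull an extra image, stop, restart, and list images to confirm the pulled image survived.

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// mk runs a minikube command and aborts on failure.
func mk(args ...string) []byte {
	out, err := exec.Command("minikube", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("minikube %v: %v\n%s", args, err, out)
	}
	return out
}

func main() {
	mk("start", "-p", "preload-demo", "--memory=3072", "--preload=false",
		"--driver=docker", "--container-runtime=crio", "--kubernetes-version=v1.32.0")
	mk("-p", "preload-demo", "image", "pull", "gcr.io/k8s-minikube/busybox")
	mk("stop", "-p", "preload-demo")
	mk("start", "-p", "preload-demo", "--memory=3072", "--wait=true",
		"--driver=docker", "--container-runtime=crio")
	fmt.Printf("%s", mk("-p", "preload-demo", "image", "list"))
}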

TestScheduledStopUnix (110.46s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-927988 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-927988 --memory=3072 --driver=docker  --container-runtime=crio: (34.062599066s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-927988 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1119 02:46:51.711086 1593044 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:46:51.711274 1593044 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:46:51.711301 1593044 out.go:374] Setting ErrFile to fd 2...
	I1119 02:46:51.711319 1593044 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:46:51.711602 1593044 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-1463525/.minikube/bin
	I1119 02:46:51.711928 1593044 out.go:368] Setting JSON to false
	I1119 02:46:51.712098 1593044 mustload.go:66] Loading cluster: scheduled-stop-927988
	I1119 02:46:51.712483 1593044 config.go:182] Loaded profile config "scheduled-stop-927988": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:46:51.712599 1593044 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/scheduled-stop-927988/config.json ...
	I1119 02:46:51.712814 1593044 mustload.go:66] Loading cluster: scheduled-stop-927988
	I1119 02:46:51.712976 1593044 config.go:182] Loaded profile config "scheduled-stop-927988": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-927988 -n scheduled-stop-927988
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-927988 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1119 02:46:52.182775 1593130 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:46:52.182965 1593130 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:46:52.182992 1593130 out.go:374] Setting ErrFile to fd 2...
	I1119 02:46:52.183011 1593130 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:46:52.183306 1593130 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-1463525/.minikube/bin
	I1119 02:46:52.183604 1593130 out.go:368] Setting JSON to false
	I1119 02:46:52.184508 1593130 daemonize_unix.go:73] killing process 1593060 as it is an old scheduled stop
	I1119 02:46:52.185629 1593130 mustload.go:66] Loading cluster: scheduled-stop-927988
	I1119 02:46:52.186120 1593130 config.go:182] Loaded profile config "scheduled-stop-927988": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:46:52.186199 1593130 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/scheduled-stop-927988/config.json ...
	I1119 02:46:52.186379 1593130 mustload.go:66] Loading cluster: scheduled-stop-927988
	I1119 02:46:52.186485 1593130 config.go:182] Loaded profile config "scheduled-stop-927988": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1119 02:46:52.192930 1465377 retry.go:31] will retry after 149.186µs: open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/scheduled-stop-927988/pid: no such file or directory
I1119 02:46:52.196352 1465377 retry.go:31] will retry after 210.372µs: open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/scheduled-stop-927988/pid: no such file or directory
I1119 02:46:52.197539 1465377 retry.go:31] will retry after 245.236µs: open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/scheduled-stop-927988/pid: no such file or directory
I1119 02:46:52.198679 1465377 retry.go:31] will retry after 318.551µs: open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/scheduled-stop-927988/pid: no such file or directory
I1119 02:46:52.199854 1465377 retry.go:31] will retry after 681.742µs: open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/scheduled-stop-927988/pid: no such file or directory
I1119 02:46:52.200989 1465377 retry.go:31] will retry after 1.132318ms: open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/scheduled-stop-927988/pid: no such file or directory
I1119 02:46:52.203188 1465377 retry.go:31] will retry after 760.979µs: open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/scheduled-stop-927988/pid: no such file or directory
I1119 02:46:52.204307 1465377 retry.go:31] will retry after 1.334938ms: open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/scheduled-stop-927988/pid: no such file or directory
I1119 02:46:52.206486 1465377 retry.go:31] will retry after 2.737949ms: open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/scheduled-stop-927988/pid: no such file or directory
I1119 02:46:52.209692 1465377 retry.go:31] will retry after 3.004098ms: open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/scheduled-stop-927988/pid: no such file or directory
I1119 02:46:52.212855 1465377 retry.go:31] will retry after 4.275078ms: open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/scheduled-stop-927988/pid: no such file or directory
I1119 02:46:52.218093 1465377 retry.go:31] will retry after 10.940798ms: open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/scheduled-stop-927988/pid: no such file or directory
I1119 02:46:52.229609 1465377 retry.go:31] will retry after 8.535687ms: open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/scheduled-stop-927988/pid: no such file or directory
I1119 02:46:52.238823 1465377 retry.go:31] will retry after 26.862587ms: open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/scheduled-stop-927988/pid: no such file or directory
I1119 02:46:52.266049 1465377 retry.go:31] will retry after 16.978613ms: open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/scheduled-stop-927988/pid: no such file or directory
I1119 02:46:52.283290 1465377 retry.go:31] will retry after 40.54353ms: open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/scheduled-stop-927988/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-927988 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-927988 -n scheduled-stop-927988
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-927988
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-927988 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1119 02:47:18.131012 1593493 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:47:18.131284 1593493 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:47:18.131318 1593493 out.go:374] Setting ErrFile to fd 2...
	I1119 02:47:18.131337 1593493 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:47:18.131718 1593493 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-1463525/.minikube/bin
	I1119 02:47:18.132045 1593493 out.go:368] Setting JSON to false
	I1119 02:47:18.132189 1593493 mustload.go:66] Loading cluster: scheduled-stop-927988
	I1119 02:47:18.132633 1593493 config.go:182] Loaded profile config "scheduled-stop-927988": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:47:18.132750 1593493 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/scheduled-stop-927988/config.json ...
	I1119 02:47:18.132979 1593493 mustload.go:66] Loading cluster: scheduled-stop-927988
	I1119 02:47:18.133141 1593493 config.go:182] Loaded profile config "scheduled-stop-927988": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
E1119 02:47:42.679546 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/functional-132054/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-927988
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-927988: exit status 7 (68.639196ms)

                                                
                                                
-- stdout --
	scheduled-stop-927988
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-927988 -n scheduled-stop-927988
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-927988 -n scheduled-stop-927988: exit status 7 (68.501182ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-927988" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-927988
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-927988: (4.770662739s)
--- PASS: TestScheduledStopUnix (110.46s)
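Note on the retry.go lines above: the test repeatedly re-checks for the scheduled-stop pid file with steadily growing delays until it shows up. A minimal Go sketch of that poll-with-backoff pattern, assuming a hypothetical waitForFile helper (not minikube's actual code):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForFile is a hypothetical helper mirroring the retry pattern in the log
// above: keep re-checking for a file, roughly doubling the wait each attempt,
// until it appears or the overall deadline passes.
func waitForFile(path string, deadline time.Duration) error {
	delay := 100 * time.Microsecond
	start := time.Now()
	for {
		if _, err := os.Stat(path); err == nil {
			return nil // file exists
		} else if !os.IsNotExist(err) {
			return err // unexpected error, give up
		}
		if time.Since(start) > deadline {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(delay)
		delay *= 2 // back off before the next attempt
	}
}

func main() {
	if err := waitForFile("/tmp/scheduled-stop.pid", 5*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("pid file found")
}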

                                                
                                    
x
+
TestInsufficientStorage (14.57s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-144983 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-144983 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (12.025058911s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"fd479df4-920a-442c-9845-8c4bd397fdd8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-144983] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"88bdbe32-f025-4d22-a8b6-bbd345b1b23b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21924"}}
	{"specversion":"1.0","id":"47981842-dcdf-44a1-9594-994c55342b00","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f6ce9c50-f959-4d1c-b83b-85446f615bc3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21924-1463525/kubeconfig"}}
	{"specversion":"1.0","id":"f9fcd37f-52f5-465c-9270-1d8370384262","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-1463525/.minikube"}}
	{"specversion":"1.0","id":"34c63e16-eba3-47f1-9404-5966ca40b365","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"9b0a2b78-9d82-459e-9b36-d2164d21fd6e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"37df7660-0da3-41ea-b24b-b54aba925b86","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"5817011b-ef33-4de9-ad65-8342e68bc1c8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"2bdf38f9-b80b-489c-8561-9184ddc82c67","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"b1c613b1-a978-415e-b330-043ab45b51c0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"e9fe20c7-5701-4311-b748-113042e36e3b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-144983\" primary control-plane node in \"insufficient-storage-144983\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"643cbbea-d368-42eb-8cdb-8091fccd2b9b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1763507788-21924 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"a3913c12-844c-44ab-9117-a08f8ef1a382","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"4e2363a0-934d-4c2c-a22a-dd15b638ec81","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-144983 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-144983 --output=json --layout=cluster: exit status 7 (310.982456ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-144983","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-144983","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1119 02:48:20.371282 1595203 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-144983" does not appear in /home/jenkins/minikube-integration/21924-1463525/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-144983 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-144983 --output=json --layout=cluster: exit status 7 (288.121511ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-144983","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-144983","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1119 02:48:20.660178 1595272 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-144983" does not appear in /home/jenkins/minikube-integration/21924-1463525/kubeconfig
	E1119 02:48:20.669625 1595272 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/insufficient-storage-144983/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-144983" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-144983
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-144983: (1.941940395s)
--- PASS: TestInsufficientStorage (14.57s)
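The --output=json run above emits one CloudEvents-style JSON object per line, with a type such as io.k8s.sigs.minikube.step, .info, or .error and a string-valued data payload (currentstep, totalsteps, message, exitcode, ...). A minimal Go sketch, assuming only the field names visible in the log, that reads such a stream from stdin and prints step and error messages:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event models only the fields of minikube's JSON output used by this sketch;
// the names match what appears in the log above.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // some event lines are long
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip lines that are not JSON events
		}
		switch ev.Type {
		case "io.k8s.sigs.minikube.step":
			fmt.Printf("step %s/%s: %s\n", ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["message"])
		case "io.k8s.sigs.minikube.error":
			fmt.Printf("error %s (exit %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}

As an illustration only, such a filter could be fed with something like: out/minikube-linux-arm64 start -p insufficient-storage-144983 --output=json ... | go run parse_events.go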

                                                
                                    
x
+
TestRunningBinaryUpgrade (58.96s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.3454767519 start -p running-upgrade-422316 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.3454767519 start -p running-upgrade-422316 --memory=3072 --vm-driver=docker  --container-runtime=crio: (29.581714181s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-422316 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-422316 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (19.194753644s)
helpers_test.go:175: Cleaning up "running-upgrade-422316" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-422316
E1119 02:52:42.679705 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/functional-132054/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-422316: (1.970836044s)
--- PASS: TestRunningBinaryUpgrade (58.96s)

                                                
                                    
x
+
TestKubernetesUpgrade (357.11s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-315505 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-315505 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (38.769883906s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-315505
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-315505: (1.533189705s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-315505 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-315505 status --format={{.Host}}: exit status 7 (91.887464ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-315505 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1119 02:50:29.009707 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-315505 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m39.207233338s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-315505 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-315505 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-315505 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (121.890453ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-315505] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21924
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21924-1463525/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-1463525/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-315505
	    minikube start -p kubernetes-upgrade-315505 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3155052 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-315505 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-315505 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1119 02:55:12.084298 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-315505 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (35.005459153s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-315505" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-315505
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-315505: (2.27961835s)
--- PASS: TestKubernetesUpgrade (357.11s)
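The status checks above use --format={{.Host}}, i.e. a Go text/template rendered against the status object, which is why the command prints just "Stopped". A minimal sketch of that rendering, assuming a hypothetical Status struct rather than minikube's real type:

package main

import (
	"os"
	"text/template"
)

// Status is a hypothetical stand-in for the struct minikube renders with
// --format; only the fields needed for this example are included.
type Status struct {
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
}

func main() {
	// Same template syntax as `minikube status --format={{.Host}}`.
	tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
	s := Status{Host: "Stopped", Kubelet: "Stopped", APIServer: "Stopped", Kubeconfig: "Stopped"}
	if err := tmpl.Execute(os.Stdout, s); err != nil {
		panic(err)
	}
}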

                                                
                                    
x
+
TestMissingContainerUpgrade (137.67s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.2958426568 start -p missing-upgrade-794811 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.2958426568 start -p missing-upgrade-794811 --memory=3072 --driver=docker  --container-runtime=crio: (1m13.141796592s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-794811
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-794811: (2.503987265s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-794811
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-794811 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-794811 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (51.514663049s)
helpers_test.go:175: Cleaning up "missing-upgrade-794811" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-794811
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-794811: (2.862381384s)
--- PASS: TestMissingContainerUpgrade (137.67s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-841094 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-841094 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (97.203594ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-841094] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21924
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21924-1463525/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-1463525/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (41.11s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-841094 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-841094 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (40.727491656s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-841094 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (41.11s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (11.12s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-841094 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-841094 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (8.082452577s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-841094 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-841094 status -o json: exit status 2 (519.423581ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-841094","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-841094
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-841094: (2.519142726s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (11.12s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (9.5s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-841094 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-841094 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (9.496743312s)
--- PASS: TestNoKubernetes/serial/Start (9.50s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/21924-1463525/.minikube/cache/linux/arm64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.33s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-841094 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-841094 "sudo systemctl is-active --quiet service kubelet": exit status 1 (333.1262ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.33s)

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.21s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.38s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-841094
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-841094: (1.378851025s)
--- PASS: TestNoKubernetes/serial/Stop (1.38s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (8.48s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-841094 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-841094 --driver=docker  --container-runtime=crio: (8.478398914s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.48s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.38s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-841094 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-841094 "sudo systemctl is-active --quiet service kubelet": exit status 1 (380.000356ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.38s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (8.2s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (8.20s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (53.76s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.99578629 start -p stopped-upgrade-245523 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.99578629 start -p stopped-upgrade-245523 --memory=3072 --vm-driver=docker  --container-runtime=crio: (33.679763709s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.99578629 -p stopped-upgrade-245523 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.99578629 -p stopped-upgrade-245523 stop: (1.284781296s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-245523 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-245523 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (18.796422543s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (53.76s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.2s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-245523
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-245523: (1.20434023s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.20s)

                                                
                                    
x
+
TestPause/serial/Start (82.93s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-210634 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-210634 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m22.927195955s)
--- PASS: TestPause/serial/Start (82.93s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (26.38s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-210634 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-210634 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (26.356127208s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (26.38s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (4.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-889743 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-889743 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (189.474305ms)

                                                
                                                
-- stdout --
	* [false-889743] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21924
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21924-1463525/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-1463525/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1119 02:55:24.310171 1633220 out.go:360] Setting OutFile to fd 1 ...
	I1119 02:55:24.310360 1633220 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:55:24.310397 1633220 out.go:374] Setting ErrFile to fd 2...
	I1119 02:55:24.310417 1633220 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 02:55:24.310705 1633220 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-1463525/.minikube/bin
	I1119 02:55:24.311149 1633220 out.go:368] Setting JSON to false
	I1119 02:55:24.312146 1633220 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":38252,"bootTime":1763482673,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1119 02:55:24.312248 1633220 start.go:143] virtualization:  
	I1119 02:55:24.315772 1633220 out.go:179] * [false-889743] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1119 02:55:24.319534 1633220 out.go:179]   - MINIKUBE_LOCATION=21924
	I1119 02:55:24.319662 1633220 notify.go:221] Checking for updates...
	I1119 02:55:24.325381 1633220 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 02:55:24.328398 1633220 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21924-1463525/kubeconfig
	I1119 02:55:24.331330 1633220 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-1463525/.minikube
	I1119 02:55:24.334205 1633220 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1119 02:55:24.337015 1633220 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 02:55:24.340488 1633220 config.go:182] Loaded profile config "kubernetes-upgrade-315505": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 02:55:24.340625 1633220 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 02:55:24.365565 1633220 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1119 02:55:24.365695 1633220 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 02:55:24.429012 1633220 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-19 02:55:24.41704747 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1119 02:55:24.429119 1633220 docker.go:319] overlay module found
	I1119 02:55:24.432204 1633220 out.go:179] * Using the docker driver based on user configuration
	I1119 02:55:24.435123 1633220 start.go:309] selected driver: docker
	I1119 02:55:24.435145 1633220 start.go:930] validating driver "docker" against <nil>
	I1119 02:55:24.435160 1633220 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 02:55:24.438644 1633220 out.go:203] 
	W1119 02:55:24.441477 1633220 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1119 02:55:24.444306 1633220 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-889743 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-889743

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-889743

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-889743

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-889743

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-889743

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-889743

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-889743

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-889743

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-889743

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-889743

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-889743"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-889743"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-889743"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-889743

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-889743"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-889743"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-889743" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-889743" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-889743" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-889743" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-889743" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-889743" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-889743" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-889743" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-889743"

>>> host: ip a s:
* Profile "false-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-889743"

>>> host: ip r s:
* Profile "false-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-889743"

>>> host: iptables-save:
* Profile "false-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-889743"

>>> host: iptables table nat:
* Profile "false-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-889743"

>>> k8s: describe kube-proxy daemon set:
error: context "false-889743" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-889743" does not exist

>>> k8s: kube-proxy logs:
error: context "false-889743" does not exist

>>> host: kubelet daemon status:
* Profile "false-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-889743"

>>> host: kubelet daemon config:
* Profile "false-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-889743"

>>> k8s: kubelet logs:
* Profile "false-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-889743"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-889743"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-889743"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 19 Nov 2025 02:55:05 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-315505
contexts:
- context:
    cluster: kubernetes-upgrade-315505
    extensions:
    - extension:
        last-update: Wed, 19 Nov 2025 02:55:05 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-315505
  name: kubernetes-upgrade-315505
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-315505
  user:
    client-certificate: /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/kubernetes-upgrade-315505/client.crt
    client-key: /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/kubernetes-upgrade-315505/client.key

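The kubeconfig dumped above defines only the kubernetes-upgrade-315505 cluster/context and has an empty current-context, which is consistent with every kubectl call against the never-started false-889743 profile failing with "context does not exist". A minimal way to confirm which contexts exist in the same environment (assuming the same KUBECONFIG as above) would be:

kubectl config get-contexts
kubectl config current-context
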
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-889743

>>> host: docker daemon status:
* Profile "false-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-889743"

>>> host: docker daemon config:
* Profile "false-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-889743"

>>> host: /etc/docker/daemon.json:
* Profile "false-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-889743"

>>> host: docker system info:
* Profile "false-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-889743"

>>> host: cri-docker daemon status:
* Profile "false-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-889743"

>>> host: cri-docker daemon config:
* Profile "false-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-889743"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-889743"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-889743"

>>> host: cri-dockerd version:
* Profile "false-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-889743"

>>> host: containerd daemon status:
* Profile "false-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-889743"

>>> host: containerd daemon config:
* Profile "false-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-889743"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-889743"

>>> host: /etc/containerd/config.toml:
* Profile "false-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-889743"

>>> host: containerd config dump:
* Profile "false-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-889743"

>>> host: crio daemon status:
* Profile "false-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-889743"

>>> host: crio daemon config:
* Profile "false-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-889743"

>>> host: /etc/crio:
* Profile "false-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-889743"

>>> host: crio config:
* Profile "false-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-889743"

----------------------- debugLogs end: false-889743 [took: 3.670549024s] --------------------------------
helpers_test.go:175: Cleaning up "false-889743" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-889743
--- PASS: TestNetworkPlugins/group/false (4.01s)

TestStartStop/group/old-k8s-version/serial/FirstStart (64.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-525469 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
E1119 02:57:42.679617 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/functional-132054/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-525469 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (1m4.114355912s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (64.12s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.45s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-525469 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [f6d2d599-e7e9-4681-a2aa-6c721027af44] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [f6d2d599-e7e9-4681-a2aa-6c721027af44] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.00333507s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-525469 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.45s)

TestStartStop/group/old-k8s-version/serial/Stop (12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-525469 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-525469 --alsologtostderr -v=3: (11.999161433s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.00s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-525469 -n old-k8s-version-525469
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-525469 -n old-k8s-version-525469: exit status 7 (70.019797ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-525469 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/old-k8s-version/serial/SecondStart (48.39s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-525469 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-525469 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (48.000230602s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-525469 -n old-k8s-version-525469
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (48.39s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-vnbjk" [e30d552f-4050-41bf-b875-0c95fae03973] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00318771s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-vnbjk" [e30d552f-4050-41bf-b875-0c95fae03973] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003867581s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-525469 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-525469 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (91.57s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-579203 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-579203 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m31.56535516s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (91.57s)

TestStartStop/group/embed-certs/serial/FirstStart (86.02s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-592123 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1119 03:00:29.009436 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-592123 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m26.022573107s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (86.02s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.42s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-579203 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [e24610f2-fbb3-428c-b4a9-925911a13a98] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [e24610f2-fbb3-428c-b4a9-925911a13a98] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004419263s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-579203 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.42s)

TestStartStop/group/embed-certs/serial/DeployApp (9.32s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-592123 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [1bb0ae41-6818-4b9f-bacc-21d0feb4f909] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [1bb0ae41-6818-4b9f-bacc-21d0feb4f909] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003995041s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-592123 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.32s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (11.99s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-579203 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-579203 --alsologtostderr -v=3: (11.988970148s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.99s)

TestStartStop/group/embed-certs/serial/Stop (12.01s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-592123 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-592123 --alsologtostderr -v=3: (12.008184624s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.01s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-579203 -n default-k8s-diff-port-579203
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-579203 -n default-k8s-diff-port-579203: exit status 7 (71.361314ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-579203 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (50.15s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-579203 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-579203 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (49.732194911s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-579203 -n default-k8s-diff-port-579203
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (50.15s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-592123 -n embed-certs-592123
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-592123 -n embed-certs-592123: exit status 7 (97.551014ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-592123 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/embed-certs/serial/SecondStart (55.46s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-592123 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-592123 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (55.104426434s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-592123 -n embed-certs-592123
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (55.46s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-7sz62" [2e8cb514-f8db-4efe-8e51-f6de4fd4b53f] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002874722s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-7sz62" [2e8cb514-f8db-4efe-8e51-f6de4fd4b53f] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003728725s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-579203 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-579203 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-76f6n" [d7ebcebd-3f82-4d27-8b51-e33625e09608] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003296514s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-76f6n" [d7ebcebd-3f82-4d27-8b51-e33625e09608] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003715531s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-592123 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/no-preload/serial/FirstStart (81s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-800908 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-800908 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m20.994923835s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (81.00s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-592123 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/newest-cni/serial/FirstStart (50.95s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-886248 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1119 03:02:55.384405 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/old-k8s-version-525469/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 03:02:55.390782 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/old-k8s-version-525469/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 03:02:55.402146 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/old-k8s-version-525469/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 03:02:55.423511 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/old-k8s-version-525469/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 03:02:55.464867 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/old-k8s-version-525469/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 03:02:55.546293 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/old-k8s-version-525469/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 03:02:55.707791 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/old-k8s-version-525469/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 03:02:56.029438 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/old-k8s-version-525469/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 03:02:56.671455 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/old-k8s-version-525469/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 03:02:57.953051 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/old-k8s-version-525469/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 03:03:00.515888 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/old-k8s-version-525469/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 03:03:05.638032 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/old-k8s-version-525469/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 03:03:15.879773 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/old-k8s-version-525469/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 03:03:36.361076 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/old-k8s-version-525469/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-886248 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (50.951005195s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (50.95s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/Stop (1.43s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-886248 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-886248 --alsologtostderr -v=3: (1.426035275s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.43s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-886248 -n newest-cni-886248
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-886248 -n newest-cni-886248: exit status 7 (92.332074ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-886248 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/newest-cni/serial/SecondStart (15.39s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-886248 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-886248 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (14.95295856s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-886248 -n newest-cni-886248
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (15.39s)

TestStartStop/group/no-preload/serial/DeployApp (10.43s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-800908 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [17120236-6096-4228-9230-9e5ac80c0aaf] Pending
helpers_test.go:352: "busybox" [17120236-6096-4228-9230-9e5ac80c0aaf] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [17120236-6096-4228-9230-9e5ac80c0aaf] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.004585909s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-800908 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.43s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)
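The warning above is printed by the test helper (start_stop_delete_test.go:271): a profile started with --network-plugin=cni but no CNI applied yet cannot schedule ordinary pods, so the user-app and addon checks for the newest-cni group are effectively no-ops. Illustrative only, not part of this run: the TestNetworkPlugins starts later in this report satisfy the same requirement by selecting a CNI at start time, e.g. (abridged from the kindnet/Start invocation below):

out/minikube-linux-arm64 start -p kindnet-889743 --cni=kindnet --driver=docker --container-runtime=crio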

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-886248 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/no-preload/serial/Stop (12.22s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-800908 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-800908 --alsologtostderr -v=3: (12.2235725s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.22s)

TestNetworkPlugins/group/auto/Start (88.87s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-889743 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E1119 03:04:17.322605 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/old-k8s-version-525469/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-889743 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m28.87071346s)
--- PASS: TestNetworkPlugins/group/auto/Start (88.87s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-800908 -n no-preload-800908
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-800908 -n no-preload-800908: exit status 7 (90.158416ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-800908 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/no-preload/serial/SecondStart (53.49s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-800908 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-800908 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (53.125157769s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-800908 -n no-preload-800908
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (53.49s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-kwdms" [c2f26d02-e618-4b0f-9089-8c76b6e21ca7] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002861172s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-kwdms" [c2f26d02-e618-4b0f-9089-8c76b6e21ca7] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003620695s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-800908 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-800908 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

TestNetworkPlugins/group/kindnet/Start (86.5s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-889743 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-889743 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m26.496545782s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (86.50s)

TestNetworkPlugins/group/auto/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-889743 "pgrep -a kubelet"
E1119 03:05:39.244279 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/old-k8s-version-525469/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I1119 03:05:39.246901 1465377 config.go:182] Loaded profile config "auto-889743": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.37s)

TestNetworkPlugins/group/auto/NetCatPod (11.37s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-889743 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-stvqt" [f6d8c87f-8559-4ff8-9a3f-6127fecf6438] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-stvqt" [f6d8c87f-8559-4ff8-9a3f-6127fecf6438] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.00425739s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.37s)

TestNetworkPlugins/group/auto/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-889743 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.20s)

TestNetworkPlugins/group/auto/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-889743 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-889743 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.17s)
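DNS, Localhost and HairPin are the three per-plugin connectivity probes, all run from inside the netcat deployment: nslookup kubernetes.default exercises cluster DNS through the CNI, nc to localhost 8080 checks the pod's own loopback, and nc to the netcat service name checks hairpin NAT (a pod reaching itself back through its own Service). They can be replayed as below; this is a sketch under the assumption that the auto-889743 context is still available, not the suite's implementation.

	// probes.go: replay the DNS, localhost and hairpin probes for one profile (sketch).
	package main

	import (
		"fmt"
		"os/exec"
	)

	func probe(ctxName, shellCmd string) {
		out, err := exec.Command("kubectl", "--context", ctxName,
			"exec", "deployment/netcat", "--", "/bin/sh", "-c", shellCmd).CombinedOutput()
		fmt.Printf("%-40s err=%v\n%s\n", shellCmd, err, out)
	}

	func main() {
		ctxName := "auto-889743"
		probe(ctxName, "nslookup kubernetes.default")    // cluster DNS through the CNI
		probe(ctxName, "nc -w 5 -i 5 -z localhost 8080") // pod loopback
		probe(ctxName, "nc -w 5 -i 5 -z netcat 8080")    // hairpin via the Service name
	}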

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (62.06s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-889743 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E1119 03:06:21.133607 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/default-k8s-diff-port-579203/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 03:06:41.615376 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/default-k8s-diff-port-579203/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-889743 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m2.059340151s)
--- PASS: TestNetworkPlugins/group/calico/Start (62.06s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-xsn5k" [62f2dc66-96a1-4700-a04d-4f30ae13d06f] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005924333s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
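ControllerPod is the readiness gate for plugins that ship their own controller or DaemonSet pod: the harness waits up to 10m for a pod matching the plugin's label (app=kindnet here, k8s-app=calico-node and app=flannel for the later groups) to report healthy. kubectl wait gives the same gate outside the harness; the sketch below is an equivalent expression of that check, not the suite's code.

	// controller_wait.go: wait for the kindnet DaemonSet pod to become Ready (sketch).
	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("kubectl", "--context", "kindnet-889743",
			"wait", "--namespace", "kube-system",
			"--for=condition=Ready", "pod", "-l", "app=kindnet",
			"--timeout=10m").CombinedOutput()
		if err != nil {
			log.Fatalf("kindnet pod never became Ready: %v\n%s", err, out)
		}
		log.Printf("%s", out)
	}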

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-889743 "pgrep -a kubelet"
I1119 03:07:10.483965 1465377 config.go:182] Loaded profile config "kindnet-889743": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.45s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (12.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-889743 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-2xmvc" [8cdf1a4f-9d06-4130-9d3f-37064ab95913] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-2xmvc" [8cdf1a4f-9d06-4130-9d3f-37064ab95913] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.003619862s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-l7g74" [f924864c-dd1f-4329-a97b-f5a4ff52fb17] Running
E1119 03:07:22.576941 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/default-k8s-diff-port-579203/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005266147s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-889743 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-889743 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-889743 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-889743 "pgrep -a kubelet"
I1119 03:07:24.013875 1465377 config.go:182] Loaded profile config "calico-889743": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (9.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-889743 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-x226v" [d2e06b92-f8d8-4432-808b-d392b268d95a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-x226v" [d2e06b92-f8d8-4432-808b-d392b268d95a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.003699294s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-889743 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-889743 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-889743 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (63.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-889743 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E1119 03:07:55.384089 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/old-k8s-version-525469/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-889743 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m3.264304298s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (63.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (78.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-889743 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E1119 03:08:23.085565 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/old-k8s-version-525469/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 03:08:44.498623 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/default-k8s-diff-port-579203/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-889743 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m18.193608444s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (78.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-889743 "pgrep -a kubelet"
I1119 03:08:49.674439 1465377 config.go:182] Loaded profile config "custom-flannel-889743": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-889743 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-tft97" [5abb693e-3445-4a5d-917f-9bc18f50e672] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-tft97" [5abb693e-3445-4a5d-917f-9bc18f50e672] Running
E1119 03:08:56.644144 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/no-preload-800908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 03:08:56.650533 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/no-preload-800908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 03:08:56.662013 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/no-preload-800908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 03:08:56.683361 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/no-preload-800908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 03:08:56.724725 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/no-preload-800908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 03:08:56.806178 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/no-preload-800908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 03:08:56.967774 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/no-preload-800908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 03:08:57.289494 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/no-preload-800908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 03:08:57.931326 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/no-preload-800908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 03:08:59.213397 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/no-preload-800908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.003769742s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-889743 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-889743 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-889743 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-889743 "pgrep -a kubelet"
I1119 03:09:19.125908 1465377 config.go:182] Loaded profile config "enable-default-cni-889743": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.42s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-889743 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-g5nhp" [bb000f01-d3de-4b53-bc84-52116afe136d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-g5nhp" [bb000f01-d3de-4b53-bc84-52116afe136d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.003649929s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (66.82s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-889743 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-889743 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m6.82411045s)
--- PASS: TestNetworkPlugins/group/flannel/Start (66.82s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-889743 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-889743 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-889743 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (52.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-889743 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E1119 03:10:18.585283 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/no-preload-800908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 03:10:29.009956 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-889743 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (52.142320567s)
--- PASS: TestNetworkPlugins/group/bridge/Start (52.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-bx2z7" [a9d82c57-b1a6-4117-a235-77563cded5ed] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.002777492s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-889743 "pgrep -a kubelet"
I1119 03:10:35.984098 1465377 config.go:182] Loaded profile config "flannel-889743": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.38s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-889743 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-5m697" [b6714a96-5d52-4922-a091-8bb9cbd8a5cd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1119 03:10:39.589020 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/auto-889743/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 03:10:39.595742 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/auto-889743/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 03:10:39.607309 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/auto-889743/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 03:10:39.628880 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/auto-889743/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 03:10:39.670215 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/auto-889743/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 03:10:39.751541 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/auto-889743/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 03:10:39.913169 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/auto-889743/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 03:10:40.234695 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/auto-889743/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 03:10:40.876878 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/auto-889743/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-5m697" [b6714a96-5d52-4922-a091-8bb9cbd8a5cd] Running
E1119 03:10:42.158864 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/auto-889743/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 03:10:44.721134 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/auto-889743/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.003476688s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.40s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-889743 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-889743 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-889743 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-889743 "pgrep -a kubelet"
I1119 03:10:48.538146 1465377 config.go:182] Loaded profile config "bridge-889743": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (9.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-889743 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-76twq" [32c2d069-b567-4c2f-a9dc-b13e742751d4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1119 03:10:49.842875 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/auto-889743/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-76twq" [32c2d069-b567-4c2f-a9dc-b13e742751d4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.004334398s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-889743 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-889743 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-889743 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.17s)

                                                
                                    

Test skip (31/328)

x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.43s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-772744 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-772744" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-772744
--- SKIP: TestDownloadOnlyKic (0.43s)

                                                
                                    
x
+
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestISOImage (0s)

                                                
                                                
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-722439" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-722439
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.71s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-889743 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-889743

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-889743

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-889743

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-889743

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-889743

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-889743

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-889743

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-889743

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-889743

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-889743

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-889743"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-889743"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-889743"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-889743

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-889743"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-889743"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-889743" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-889743" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-889743" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-889743" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-889743" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-889743" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-889743" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-889743" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-889743"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-889743"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-889743"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-889743"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-889743"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-889743" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-889743" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-889743" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-889743"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-889743"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-889743"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-889743"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-889743"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 19 Nov 2025 02:55:05 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-315505
contexts:
- context:
    cluster: kubernetes-upgrade-315505
    extensions:
    - extension:
        last-update: Wed, 19 Nov 2025 02:55:05 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-315505
  name: kubernetes-upgrade-315505
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-315505
  user:
    client-certificate: /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/kubernetes-upgrade-315505/client.crt
    client-key: /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/kubernetes-upgrade-315505/client.key
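The kubeconfig dumped above only carries the kubernetes-upgrade-315505 entry and an empty current-context, which is why every kubectl-based probe in this debug dump reports that the kubenet-889743 context does not exist. A minimal diagnostic sketch, assuming kubectl is pointed at the same kubeconfig shown above (not part of the test harness):

# List the contexts actually present in the kubeconfig the tests use;
# only kubernetes-upgrade-315505 should appear, and none is current.
$ kubectl config get-contexts

# Any probe pinned to the never-created profile fails the same way as the entries above.
$ kubectl --context kubenet-889743 get pods -A
error: context "kubenet-889743" does not exist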

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-889743

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-889743"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-889743"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-889743"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-889743"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-889743"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-889743"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-889743"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-889743"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-889743"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-889743"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-889743"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-889743"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-889743"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-889743"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-889743"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-889743"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-889743"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-889743"

                                                
                                                
----------------------- debugLogs end: kubenet-889743 [took: 3.557721731s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-889743" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-889743
--- SKIP: TestNetworkPlugins/group/kubenet (3.71s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (5.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
E1119 02:55:29.010054 1465377 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/addons-238225/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
panic.go:636: 
----------------------- debugLogs start: cilium-889743 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-889743

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-889743

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-889743

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-889743

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-889743

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-889743

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-889743

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-889743

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-889743

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-889743

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889743"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889743"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889743"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-889743

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889743"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889743"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-889743" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-889743" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-889743" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-889743" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-889743" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-889743" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-889743" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-889743" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889743"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889743"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889743"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889743"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889743"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-889743

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-889743

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-889743" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-889743" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-889743

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-889743

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-889743" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-889743" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-889743" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-889743" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-889743" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889743"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889743"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889743"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889743"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889743"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21924-1463525/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 19 Nov 2025 02:55:05 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-315505
contexts:
- context:
    cluster: kubernetes-upgrade-315505
    extensions:
    - extension:
        last-update: Wed, 19 Nov 2025 02:55:05 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-315505
  name: kubernetes-upgrade-315505
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-315505
  user:
    client-certificate: /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/kubernetes-upgrade-315505/client.crt
    client-key: /home/jenkins/minikube-integration/21924-1463525/.minikube/profiles/kubernetes-upgrade-315505/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-889743

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889743"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889743"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889743"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889743"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889743"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889743"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889743"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889743"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889743"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889743"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889743"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889743"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889743"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889743"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889743"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889743"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889743"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-889743" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889743"

                                                
                                                
----------------------- debugLogs end: cilium-889743 [took: 5.060473208s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-889743" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-889743
--- SKIP: TestNetworkPlugins/group/cilium (5.28s)
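As with the kubenet variant, the cilium-889743 profile was never created, so the cleanup step above only clears leftover profile state. A minimal sketch of how one might confirm that before deleting, using standard minikube and docker CLI commands (the exact flags the harness passes are not shown in this log):

# Confirm the profile is absent from minikube's profile list
$ out/minikube-linux-arm64 profile list

# With the docker driver, a created profile would also show up as a container
$ docker ps -a --filter name=cilium-889743

# Remove any leftover state for the profile (the command helpers_test.go runs above)
$ out/minikube-linux-arm64 delete -p cilium-889743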

                                                
                                    